A two-stage inter-rater approach for enrichment testing of variants associated with multiple traits
Shared genetic aetiology may explain the co-occurrence of diseases in individuals more often than expected by chance. On identifying associated variants shared between two traits, one objective is to determine whether such overlap may be explained by specific genomic characteristics (eg, functional annotation). In clinical studies, inter-rater agreement approaches assess concordance among expert opinions on the presence/absence of a complex disease for each subject. We adapt a two-stage inter-rater agreement model to the genetic association setting to identify features predictive of overlap variants, while accounting for their marginal trait associations. The resulting corrected overlap and marginal enrichment test (COMET) also assesses enrichment at the individual trait level. Multiple categories may be tested simultaneously and the method is computationally efficient, not requiring permutations to assess significance. In an extensive simulation study, COMET identifies features predictive of enrichment with high power and has well-calibrated type I error. In contrast, testing for overlap with a single-trait enrichment test has inflated type I error. COMET is applied to three glycaemic traits using a set of functional annotation categories as predictors, followed by further analyses that focus on tissue-specific regulatory variants. The results support previous findings that regulatory variants in pancreatic islets are enriched for fasting glucose-associated variants, and give insight into differences/similarities between characteristics of variants associated with glycaemic traits. Also, despite regulatory variants in pancreatic islets being enriched for variants that are marginally associated with fasting glucose and fasting insulin, there is no enrichment of shared variants between the traits.
INTRODUCTION
Apparent links between disease susceptibilities may be explained by shared genetic aetiology, such that a variant may be associated with multiple traits. Besides identifying shared associated variants, a further objective is to determine whether the overlap of associated variants between the traits may be related to SNP (or trait × SNP)-specific characteristics. Identification of specific characteristics that are predictive of overlap enables refinement of the set of variants in further searches for predisposing variants of both traits. Moreover, Bayesian priors may be defined such that a SNP belonging to a predictive category has a higher prior probability of association than SNPs outside that category; priors may also be allowed to differ so that the prior probability increases with the number of predictive categories that the SNP belongs to. The overall purpose of the proposed method, the corrected overlap and marginal enrichment test (COMET), is to determine whether agreement (overlap) between the verdicts of association between a SNP and a phenotype can be related to SNP-specific (eg, functional annotation) or trait × SNP-specific characteristics, such as membership of known biological pathways.
Several existing methods address similar, but distinct, objectives; for example, GoShifter, 1 genetic analysis incorporating pleiotropy and annotation (GPA), 2 and a method implemented in the database for annotation, visualisation and integrated discovery (DAVID). 3 All of these methods assess enrichment of annotations among trait-associated variants and, on application to shared variants between different traits, do not account for marginal enrichment of the individual traits. Testing for annotation enrichment within trait-associated SNPs is the reverse of the proposed objective of testing for enrichment of trait-associated variants within annotations. In the latter, the number of associated variants is treated as the random variable, which aligns with the perception that we observe a number of associated variants and there are more to discover. In contrast, testing for annotation enrichment in a set of associated SNPs fixes the number of associations found and assesses annotation status among them; the annotation status is treated as the random variable in that approach.
With regards to overlap enrichment extensions, any of the single-trait enrichment methods may be extended by considering the set of SNPs associated with two traits. However, this does not automatically account for enrichment due to chance, as the marginal distributions of the individual traits are not accounted for. The GPA approach uses annotation information to increase the statistical power to identify risk variants. The authors of the method recommend caution in interpreting the enrichment testing approach of GPA with respect to overlap variants, as a significant P-value may be due to marginal enrichments. 2 GoShifter uses a computationally intensive permutation approach 1 and the test implemented in DAVID involves calculation of a hypergeometric probability. 3 We apply DAVID to test for enrichment among shared variants, rather than variants associated with a single trait, and demonstrate that it has an increased type I error rate.
Owing to the inflated error, power comparisons are not carried out with DAVID.
COMET requires only summary statistics and is applicable to case-control or quantitative trait studies that may or may not have overlapping individuals. Simulations demonstrate that any degree of overlap between studies does not inflate the type I error for detection of SNP characteristics that are predictive of concordant associations between the traits. As COMET only requires fitting several linear models and does not depend on permutations to assess significance, it is computationally efficient. The data only need to be clumped once, and may then be quickly analysed with any set of covariates. On a Linux (64-bit) machine with x86-64 architecture, 32 cores, and two 2.1 GHz 12-core AMD 6272 CPUs, on data that has already been clumped, COMET runs for one pair of traits and one set of five covariates in 3 min 44 s for our data application, where the fitting of the models takes 36 s.
There is flexibility in the covariates that may be incorporated in the analysis, leading to a range of potential applications. Before our real data application, we first examine the potential for a set of functional annotation covariates to differentiate between associated variants (with P < 5 × 10⁻⁶, as given by the National Human Genome Research Institute (NHGRI) Genome-Wide Association Study (GWAS) catalogue 4 ) for 14 different diseases/traits. COMET is then employed with these covariates to assess whether any annotation class is enriched for variants associated with fasting insulin, fasting glucose or 2-h glucose, or enriched for shared associations between any pair of the three glycaemic traits (from the Meta-Analyses of Glucose and Insulin-related traits Consortium; MAGIC). As more genome-wide significant loci have been identified for the glucose traits than for fasting insulin, 5 an objective is to determine whether there are certain characteristics that are enriched for variants associated with either or both traits; such features may then be used for refinement of searches for further associated variants. On the basis of our results, we proceed with further analyses using COMET to test for enrichment of trait(s)-associated variants within tissue-specific regulatory regions. The software for COMET is freely available at http://www.sanger.ac.uk/science/tools/comet.
MATERIALS AND METHODS
Studies of agreement are common in clinical studies and psychiatric research, where one is often interested in the agreement among expert/rater opinions. A special case is when the opinion/rating is a dichotomous outcome, such as a diagnosis. Inter-rater agreement approaches give a measure of the concordance between two raters (eg, physicians) that make a verdict or pronouncement (eg, disease presence/absence) on the same subject, and adjust for agreement between raters that may occur simply due to chance. A two-stage inter-rater agreement model identifies covariate categories containing more concordance/discordance in verdicts than expected by chance, accounting for the marginal rater opinions. 6 We adapt this model to the genetic association setting to identify features predictive of shared associations at a SNP, accounting for the marginal trait associations; each 'subject' corresponds to a SNP, whereas each 'rater' corresponds to a trait. It may also be used to assess features predictive of association for individual traits.
At each genetic variant, a binary variable is defined for each trait corresponding to evidence of association with the trait, based on a prespecified significance threshold; this corresponds to the verdict of each rater. Analogous to comparing measurements taken by raters on the same individuals, we compare measurements of trait-association at each SNP. Rather than considering agreement for both traits (ie, either having or not having association evidence at the same SNP), we focus only on both traits having association evidence, as lack of association evidence does not imply that the association does not exist (eg, due to lack of power).
Evidence of association for each trait with each SNP may be defined according to P-values or Bayes factors (BFs). We focus on BFs, as they may be easily computed from summary statistics 7 and have several advantages over P-values in the comparison of multiple studies. 8 In both our simulations and data application, we used a Bayesian threshold of log10(ABF) > 0.695 (based on threshold settings R = 20, π0 = 0.99), corresponding to a P-value threshold of 0.004-0.01, depending on the study size; 8 see the Supplementary Information for BF details.
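Approximate BFs of this kind can be computed from a study's summary statistics alone. The following is a minimal sketch in the style of a Wakefield-type approximate Bayes factor; the prior effect variance `W` and the example effect sizes are illustrative assumptions, not COMET's exact settings:

```python
import math

def log10_abf(beta_hat, se, W=0.04):
    """Approximate Bayes factor for association (H1) vs no association (H0),
    computed from a GWAS effect estimate and its standard error.
    V is the sampling variance of beta_hat; W is the assumed prior variance
    of the true effect under H1 (W = 0.04, ie sd 0.2, is an illustrative choice).
    """
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    # log ABF = 0.5*log(V/(V+W)) + z^2 * W / (2*(V+W)), converted to base 10
    log_abf = 0.5 * math.log(V / (V + W)) + z2 * W / (2 * (V + W))
    return log_abf / math.log(10)

# A SNP would be classed as trait-associated when log10(ABF) exceeds the
# chosen threshold (0.695 in the paper, derived from R = 20, pi0 = 0.99).
print(log10_abf(0.05, 0.01))   # strong signal (z = 5)
print(log10_abf(0.005, 0.01))  # weak signal (z = 0.5)
```

Larger |z| and smaller sampling variance both push the BF towards association, which is why the implied P-value threshold varies with study size.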
Model
We consider SNP-specific and/or trait × SNP-specific covariates based on prior genetic information such as biological annotations. Covariate categories may then be tested for enrichment of (marginal and/or shared) associated variants. As the inter-rater methods assume independent subjects (with subjects here corresponding to SNPs), we first prune (r² > 0.1) the set of SNPs (minor allele frequency (MAF) > 5%) that comprise the GWAS data for each trait. The MAF threshold of 5% was chosen as we focus on GWAS results, though in application to other data sets (eg, large samples of exome data) lower-MAF variants may be included. SNPs are clumped at r² > 0.1 to satisfy the independence assumption required for the regression models. We make use of a joint association metric that accounts for the significance of a SNP with respect to each trait, maximising the retention of SNPs associated with multiple traits, rather than SNPs with high association evidence for one trait and not the other 8 (see the Supplementary Information for details).
Let x_i be a vector of SNP-specific covariates, x_ir be a vector of SNP × trait-specific covariates, Y_ir = 1 if there is evidence of association at SNP i for trait r (r = 1, 2), and p_ir = Pr(Y_ir = 1 | x_i, x_ir); r = 1, 2; i = 1, ..., m. In the inter-rater model, 6 agreement between the raters at subject i would be defined as Y_i = Y_i1 Y_i2 + (1 − Y_i1)(1 − Y_i2). Instead, we focus on the concordance of associated SNPs, and therefore consider Y_i = Y_i1 Y_i2. The marginal models for the conditional probability of a detected association for trait r (r = 1, 2) are logit(p_ir) = γ_0r + γ_rᵀ x_i + δ_rᵀ x_ir. The intercept term γ_0r is the baseline probability of association, accounting for the probability of association that is not attributable to any of the covariates. An effect estimate that meets the significance threshold (eg, 0.05) and is positive suggests that SNPs within the coinciding covariate category tend to be associated with the trait (ie, positive enrichment); negative enrichment is present if the significant effect estimate is below zero. Collectively, this model tests for covariate categories that are predictive of SNP-trait associations. These marginal models are first fit independently for each trait; the fitted models are then used to obtain estimates of the log-odds of chance overlap, Ẑ_i = logit(p̂_i1 p̂_i2), which accounts for chance overlap under the assumption that the probabilities of association for each trait are independent (if modelling agreement rather than concordance of association, one would instead have Ẑ_i = logit(p̂_i1 p̂_i2 + (1 − p̂_i1)(1 − p̂_i2))). This term is then used as an offset in the model for the probability of overlapping associations (or agreement): logit(p_i) = β_0 + βᵀ x_i + Ẑ_i, where p_i = Pr(Y_i = 1 | x_i). If overlap is due to chance alone, then all covariate effect estimates are not significantly different from zero and the probability of overlap is simply the product of the marginal probabilities, logit⁻¹(Ẑ_i). This observation helps us make inferences on the features of SNPs for which there is an enrichment of overlapping associations.
A statistically significant intercept term β_0 would be suggestive of more agreement than expected by chance that is not accounted for by any of the covariates. For instance, if SNPs associated with one trait tend to be associated with the other trait, but this sharing of associations is not related to any of the covariates, then the intercept term would account for this agreement. This framework may easily be extended to identify predictive features of SNPs shared among R traits by defining agreement at SNP i as Y_i = ∏_{r=1}^{R} Y_ir. In our particular application to three glycaemic traits, there were only six SNPs that were shared between all three traits. Therefore, little inference could be made on the features of this small set of SNPs, and we proceeded by applying COMET to each pair of traits.
The traits may be from studies composed of disjoint sets of individuals, or from studies that share some individuals in common. In particular, for two quantitative traits, measurements for both traits may be taken on a portion of individuals. In the usual inter-rater set-up, different raters have correlated responses by the nature of rating the same subject, which is akin to the correlation between trait associations expected in the presence of shared individuals when testing at a certain SNP. This may influence the overall probability of concordance between the ratings but, intuitively, although this will affect the intercept term, it should not affect the tests of whether or not any of the covariates explain the concordance in the ratings. In the scenario of two case-control studies, there is the possibility of shared individuals between the control sets of the two studies. These shared controls may influence the individual SNP association tests, but by similar reasoning to the quantitative traits case, only the intercept term is expected to experience an impact. On a similar note, the traits may be correlated (eg, height and birth weight) or linked through a phenotypic derivation (eg, height and body mass index, kg/m²), as the offset term accounts for each of the marginal distributions when testing for enrichment among shared variants.
Full marginal models for p_ir are recommended, such that any covariates that are considered for inclusion in the overlap model are included in each marginal model. This prevents spurious results in the overlap model for p_i, as the p̂_ir are needed to estimate the offset term. 6 In the final overlap model, covariates of categories containing no overlap SNPs are removed.
It has been noted that the variance estimates for each coefficient of the model for p_i assume that the offset term is known rather than estimated, so that alternative approximation techniques such as the jackknife are suggested. 6 A jackknife estimate of the variance may be obtained by a leave-one-out procedure in which each subject (SNP) is removed and the two-stage models are fit to the data with one fewer subject. However, as there are a large number of SNPs, there are negligible changes to the fitted models on the removal of each individual SNP. Therefore, for computational efficiency, we make use of the resulting coefficient estimates and standard errors from the model based on a known offset term. A flow chart for COMET is given in Figure 1.
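For reference, the leave-one-out calculation being approximated away has the usual jackknife form. A generic sketch follows; the fit function and data layout are placeholders for illustration, not COMET's internals:

```python
import numpy as np

def jackknife_se(fit_fn, data):
    """Jackknife standard errors: refit with each of the m rows (SNPs)
    deleted in turn, then combine the leave-one-out coefficient estimates.
    fit_fn(data) must return a coefficient vector; the cost is m refits,
    which is why a known-offset approximation is attractive for large m."""
    m = data.shape[0]
    thetas = np.array([fit_fn(np.delete(data, i, axis=0)) for i in range(m)])
    dev = thetas - thetas.mean(axis=0)
    # Jackknife variance: (m-1)/m times the sum of squared deviations
    return np.sqrt((m - 1) / m * (dev ** 2).sum(axis=0))

# Sanity check on a toy statistic: for the sample mean, the jackknife SE
# reproduces the familiar s / sqrt(m).
x = np.arange(10, dtype=float).reshape(-1, 1)
print(jackknife_se(lambda d: d.mean(axis=0), x))
```

With hundreds of thousands of pruned SNPs, each deletion perturbs the fitted coefficients negligibly, consistent with the known-offset shortcut used above.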
Covariates
Various SNP-specific covariates may be used to inform about overlap between traits, allowing flexibility in use of the method. A set of possible SNP-specific covariates is listed in Table 1, which is a modification of categories that have previously been considered when making use of prior knowledge for prioritising SNPs for follow-up. 9 Covariate categories that each SNP is positive for are determined by the Variant Effect Predictor (VEP, v81) of Ensembl, 10 which outputs all consequences of each variant on the protein sequence and gene expression, across all transcripts for the gene, so that a SNP may be positive for multiple covariate categories. As a reference for the general features of SNPs, we examine the distribution of SNPs from the 1000 Genomes CEU samples, phase 3 release. 11 On pruning the common SNPs (MAF > 0.05) at r² > 0.1 (using PLINK v1.07), there are 208 780 approximately independent variants. Table 1 provides the proportion of these SNPs that belong to each of the covariate categories, as well as the coinciding proportions for unpruned common SNPs. These proportions show a close correspondence, suggesting that the pruned SNPs reflect the overall distribution seen in the common SNPs in CEU of 1000 Genomes.
Simulations
Each simulation is based on the 208 780 approximately independent SNPs that remain after pruning the common SNPs at r² > 0.1 in the 1000 Genomes CEU samples. Functional annotations for these SNPs are obtained from VEP (v79). We focus on models that include five SNP-specific covariates listed in Table 1, namely Q1, Q2, Q3, Q5 and Q6, which are positive in 51.5%, 0.39%, 0.54%, 1.40%, and 64.1% of SNPs, respectively; Q4 is not included in the models as <0.025% of the pruned SNPs fall within this category. Several technical details regarding differences between these simulation proportions and those of Table 1 are given in the Supplementary Information.
For assessment of power, only one of the five covariate categories (Q1 or Q5) is set as enriched for overlapping associations between the traits, though this does not restrict causal SNPs from belonging to other categories. We consider various proportions p′12 of variants that are associated with both traits and belong to the enriched category. The overall proportion of overlap variants is denoted by p12, whereas the marginal proportions of SNPs associated with traits 1 and 2 are given by p1 and p2, respectively. The simulation algorithm, parameter selection, and technical details are given in the Supplementary Information. For each parameter setting, we run 1000 replications to approximate type I errors and power. Type I errors are approximated from simulations that do not assign enrichment to any of the covariate categories, such that overlapping variants are present and there is no restriction on their allocation to covariate categories; this mimics the natural distribution of SNPs among the covariate categories. For further assessment of any inflation, we also consider QQ-plots of the standardised effect estimates compared with a standard normal distribution, as well as inflation factors (calculated from the median of the χ² distribution). As a comparison, type I errors for enrichment testing of overlap variants are also determined via the DAVID software. 3
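With 1000 replications per setting, the sampling uncertainty of an empirical type I error estimate follows directly from the binomial distribution of the rejection count. A small sketch using the normal approximation (the rejection count below is illustrative):

```python
import math

def type1_ci(n_reject, n_sims=1000, z=1.96):
    """Empirical type I error rate with a 95% normal-approximation CI,
    based on the binomial rejection count over n_sims null replicates."""
    p = n_reject / n_sims
    half = z * math.sqrt(p * (1 - p) / n_sims)
    return p, max(0.0, p - half), p + half

# eg, 50 rejections out of 1000 null replicates at nominal alpha = 0.05
rate, lo, hi = type1_ci(50)
print(rate, lo, hi)
```

A category would be flagged as inflated when the interval's lower bound lies above the nominal 0.05, the pattern reported for DAVID on the small covariate categories.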
Real data application
Before applying COMET to real data, we considered the distribution of the covariates among variants that are associated with fourteen traits/diseases. This pre-assessment illustrated that there is potential for the covariates to differentiate between trait-associated variants for different traits, as well as potential for identifying covariates that may be enriched for shared variants. Details and results on these comparisons are given in the Supplementary Information and in Supplementary Figure S5.
COMET was applied with the set of five functional annotation covariates to each pair of fasting insulin, fasting glucose and 2-h glucose, which were all measured on non-diabetic European-ancestry individuals (from MAGIC). The summary statistics for these glycaemic traits were downloaded from www.magicinvestigators.org and details on this dataset are provided in the Supplementary Information. Rather than restricting certain covariates to tests of positive enrichment (due to small covariate proportions) and others to two-sided tests (of positive or negative enrichment) in the overlap model, we simplify the presentation and focus only on positive enrichment. We further demonstrate how COMET could be used to explore regulatory annotation in greater depth by making use of an extensive database of regulatory information, RegulomeDB, which covers over 100 tissues and cell lines. 12 In RegulomeDB, known and predicted regulatory DNA elements include regions of DNase hypersensitivity, binding sites of transcription factors, and promoter regions that have been characterised to regulate transcription.
Of particular interest are tissues that are involved in metabolism, ie, pancreas, liver, cardiac muscle, skeletal muscle, and adipose tissues. Pancreatic islet cells are central in the pathogenesis of type 2 diabetes (T2D), and active islet enhancer clusters have been demonstrated to be enriched in T2D risk-associated and fasting glucose-associated variants. 13 In addition, liver, adipose tissue, and skeletal and cardiac muscles develop insulin resistance as a defence against damage from an excess nutrient load. 14 Owing to the likely collinearity between the tissue-specific regulatory covariates, we ran separate models including one regulatory covariate annotated by RegulomeDB, for several filtrations on the tissue type(s); details of the specific cell/tissue lines within each tissue group are provided in the Supplementary Information. Initially, eight models were considered: one for each of the five metabolism-involved tissues, liver cancer (as a tissue that is involved in metabolism, but cancerous, so may or may not be enriched for glycaemic trait-associated variants), the union of the five metabolism-involved tissues, and the collection of all tissues available in RegulomeDB. As the pancreatic tissue group consists of tissues from both pancreatic islets and the pancreatic duct, we also compared our results when only pancreatic islets are included. The respective proportions of pruned variants (r² < 0.1) that are regulatory in each tissue type are 0.0768 (pancreas), 0.0666 (pancreatic islets only), 0.0779 (liver), 0.0275 (cardiac muscle), 0.116 (skeletal muscle), 0.0012 (adipose), and 0.0955 (liver cancer). On considering all five tissues involved in metabolism, the proportion is 0.166, or 0.162 if pancreatic duct tissues are excluded. Among all available tissues, the proportion of regulatory variants is 0.693.
Simulation study
Two equal-sized case-control studies were generated, where study r (for trait r; r = 1, 2) is composed of Nr cases and Nr controls; we consider study 1 with N1 = 3000 each of cases and controls and study 2 with N2 = 5000 each of cases and controls, as well as (N1, N2) taking values (5000, 10 000) and (10 000, 20 000). In our null simulations, the proportions of trait-associated variants for trait 1 (marginal), trait 2 (marginal) and shared between them are, respectively, p1 = 0.04, p2 = 0.02 and p12 = 5 × 10⁻⁴. For all five covariates, both sets of standardised effect estimates from the marginal models display a close alignment with the standard normal distribution (eg, see Supplementary Figure S1). The coinciding inflation factors for covariates Q1, Q2, Q3, Q5, and Q6 are, respectively, 1.07, 1.19, 1.09, 0.97, and 1.08, which are not substantially inflated, though the smallest category, Q2 (containing <0.5% of the variants), appears to be the most inflated.
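The inflation factors quoted here are the usual median-based genomic-control statistic: the median of the squared standardised estimates divided by the median of a χ² variable with 1 degree of freedom (≈0.455). A quick sketch, assuming scipy is available; the deterministic grid of normal quantiles stands in for null effect estimates:

```python
import numpy as np
from scipy import stats

def inflation_factor(z_scores):
    """Median-based inflation factor for standardised effect estimates.
    Under the null z ~ N(0,1), so z^2 ~ chi-square(1) and lambda ~ 1."""
    return np.median(np.asarray(z_scores) ** 2) / stats.chi2.ppf(0.5, df=1)

# Well-calibrated example: z-scores placed at exact N(0,1) quantiles.
n = 100_000
z = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)
print(round(inflation_factor(z), 3))  # → 1.0
```

Values well above 1 (as for the small category Q2) indicate more large standardised statistics than the null distribution predicts; values below 1 indicate deflation.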
For detecting positive enrichment of overlap variants at significance level α = 0.05, type I error estimates for COMET and DAVID are given in Table 2. The type I errors of DAVID are consistently higher than those based on COMET, and the 95% confidence intervals for the three categories with fewer than 2% of the variants (Q2, Q3, Q5) are well above 0.05. COMET has a better controlled type I error rate, as the 95% confidence intervals contain 0.05 or have an upper bound that is slightly below it.
Positive-enrichment overlap tests with COMET are well-calibrated for all covariates, though tests for negative enrichment are less well-calibrated for covariates Q2, Q3, and Q5 (eg, see Figure 2). As Q2, Q3, and Q5 harbour fewer than 2% of the variants, this proportion decreases substantially when we make the additional restriction that variants are detected as overlap variants. Consequently, approximately half of the simulations result in either an empty set of overlap variants in the covariate category, so that the covariate is excluded from the final overlap model, or a negative effect estimate that is not significantly different from 0; this behaviour is illustrated in the QQ-plots. The inflation factors for Q1 and Q6 are 0.83 and 0.93, while inflation factors calculated from the positive standardised statistics for Q2, Q3, and Q5 are 1.46, 0.62, and 1.05. In summary, one-sided tests for positive enrichment are well-calibrated for all covariates. There is inflation for Q2 and deflation for Q3, which, respectively, contain 0.39% and 0.54% of the variants, suggesting that the type I error rate is not well controlled when fewer than 1% of the variants are positive for the covariate. In addition, two-sided tests for enrichment in either direction may be applied to the larger categories, Q1 and Q6.
For assessment of power, we considered each of Q5 (1.4% of variants) and Q1 (51.5% of variants) as being enriched for overlap, so that any impact of the category proportions may also be assessed. Covariate categories that are not designated as enriched for overlap each give additional type I error results, which can be averaged over the simulation settings for each covariate (Supplementary Table S1); individual results for all coefficients are given in Supplementary Tables S2 and S3. The average error rates shown in Supplementary Table S1 appear to have more stability than the individual rates.
For power assessment, the proportion of overlap causal variants that fall within Q5 was assigned values from 5% to 50% (Figure 3; Supplementary Table S4). For (N1, N2) set at (5000, 10 000) or (10 000, 20 000), the detection power is close to 100% at 20% enrichment, and is high at 10% enrichment; power near 80% is attained for (3000, 5000) when there is at least 10% enrichment. The enrichment setting of p′12 = 7 × 10⁻⁶ corresponds to the null hypothesis of no enrichment (see the Supplementary Information for details), and the respective type I error estimates are 0.045, 0.039, and 0.035 for increasing study sizes. Results for Q1 in the case-control setting and all quantitative trait results are shown in the Supplementary Information.
Application to glycaemic traits
Results of the positive enrichment tests from COMET applied to fasting glucose (FG), fasting insulin (FI) and 2-h glucose (2G) are given in Table 3. Among potentially deleterious SNPs (0.67% of pruned common variants), enrichment of overlap variants is detected for FG-2G (two variants) and for FI-2G (one variant); see Table 3.
In addition, SNPs in mature miRNAs that have a regulatory effect (ie, that are transcribed, though not translated) tend to be enriched for variants associated with each of the three glycaemic traits. Nonetheless, there are not more shared variants than expected by chance, given these marginal enrichments. Our results also indicate that there is positive enrichment of variants associated with FG and with FG-2G among SNPs that overlap potentially regulatory or regulatory regions. Consequently, we tested tissue-specific regulatory annotations for positive enrichment in an additional analysis.
Tissue-specific analysis of glycaemic traits
Results for tissue-specific analyses are shown in Table 4. Enrichment in adipose tissue is not detected, as it only contains 0.12% of the variants. Regulatory variants in pancreas tissues (and only pancreatic islets) are enriched for marginal associations with FG, FI, and 2G, as well as FG-2G shared variants, though they do not contain more FG-FI variants than would be expected by chance (Table 4). Analysis without accounting for the marginal distributions can be obtained by excluding the offset term, resulting in a reduction of the P-value to 0.044 (pancreas tissues), suggesting enrichment. This illustrates that marginal predictive factors are not necessarily predictive of overlap variants, with the offset term able to account for any perceived overlap that may in fact be due to chance. FI and FG associated variants are enriched in liver tissue regulatory variants, though 2G variants are not. COMET also detected that regulatory variants in cardiac muscle are enriched for FG and those in cardiac and skeletal muscle are each enriched for the FG-2G overlap.
Considering the five metabolic tissues collectively, there is enrichment for each individual trait, as well as for FG-2G, though these signals disappear when all available tissues are considered collectively. There is an absence of FI-FG enrichment signals in the tissue-specific analyses, and although the collective tissue analysis suggests enrichment, such overlap variants are regulatory in a range of tissues that may be contributing to the signal. The FI-FG SNPs (GRCh37/hg19 assembly) that are regulatory in at least one metabolism-involved tissue are listed in Supplementary Table S8, together with their nearest gene and associated phenotypes. In Supplementary Table S9, analogous information is given for the FI-FG overlap SNPs that are only regulatory in a tissue that is not involved in metabolism, such as tissues from cancerous liver, blood (cancerous and normal), cerebellum, skin, and bone marrow.

Table 2 Estimates of type I error (including 95% confidence intervals) for detection as a category positively enriched with overlap signals at coefficient significance level 0.05, for the null enrichment setting with equal-sized cases and controls.
DISCUSSION
We have proposed COMET as a computationally efficient method that makes use of GWAS summary statistics to test categories for enrichment of variants that are associated with multiple traits, accounting for chance overlap due to the marginal associations of each trait; individual trait-specific tests of enrichment are also encompassed. In the association classification of variants we used a Bayesian threshold of log10(ABF) > 0.695 (based on R = 20, π0 = 0.99) that corresponds to a P-value threshold of 0.004-0.01, depending on the study size. 8 This lenient threshold allows us to highlight new overlapping variants not already known to be genome-wide significant, and such variants that fall within an identified enrichment category (ie, a category predictive of overlapping association) may have a stronger prior probability of having true associations with each trait. Enrichment categories may also indicate a direction of refinement for future searches for overlap variants. For example, our analysis suggests that being a potentially deleterious variant is a predictive factor for shared associated variants between glycaemic traits. Therefore, further shared associations may be revealed through the analysis of whole-exome or whole-genome data, which are enriched for potentially deleterious variants that are generally poorly represented on genome-wide association arrays.

COMET power for detecting Q5 as a category positively enriched with overlap signals at coefficient significance level 0.05. In each of the 1000 simulations, the Q5 category (1.4% of common CEU SNPs LD-pruned at r² > 0.1) was set to have a certain proportion of shared causal variants. The selected proportion of causal variants in this category, p′12, is indicated in each column, followed by the proportion among the causal variants, p′12/p12, as a percentage. Studies 1 and 2 are equal-sized case-control studies of N1 each and N2 each, respectively. Type I error is denoted by bold font.
As a means of pre-assessing the usefulness of a set of functional annotation covariates for our model, we compared the proportion of covariate-positive trait-associated variants (with P < 5 × 10⁻⁶) for an assortment of traits. However, by considering the proportion of associated variants that are positive for each covariate there is a range of confidence interval sizes across traits, as the confidence interval depends on the number of associated variants that are listed in the NHGRI-EBI GWAS catalogue. 15 A further limitation is that the results in the GWAS catalogue rely on a variety of studies with a range of sample sizes, which in turn influences the ability to detect trait associations within each study. Therefore, the ability to detect enrichment based on these proportions is heavily influenced by the number of listed trait-associated variants. This pre-assessment gives further support for our approach of detecting enrichment of associated variants within covariates, rather than detecting enrichment of covariates within associated variants.
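The dependence of the confidence-interval width on the number of catalogued associated variants can be sketched with a simple normal-approximation (Wald) interval for a proportion; the counts below are hypothetical, chosen only to show the effect.

```python
import math

def wald_ci(k, n, z=1.96):
    """Approximate 95% CI for the proportion of covariate-positive
    trait-associated variants, given k positives out of n catalogued
    variants (normal approximation; illustrative numbers only)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Two traits with the same observed proportion (20%) but very different
# numbers of catalogued associated variants:
lo_small, hi_small = wald_ci(4, 20)       # poorly studied trait, n = 20
lo_large, hi_large = wald_ci(200, 1000)   # well-studied trait, n = 1000
```

The trait with few catalogued hits yields a much wider interval than the well-studied trait, which is why comparisons based on these proportions are hard to interpret across traits.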
In an application to glycaemic traits we detect enrichment of associated variants (marginal and/or shared) within several functional annotation classes, and identify well-established positive controls, together with their biological support. The two glucose traits appear to have more overlapping variants falling within some categories than expected by chance, suggesting that these two traits are similar to each other, as expected.
The missense variant rs1260326 (hg19 chr2:g.27730940T>C; in GCKR) is associated with all three traits, and genome-wide significant for FG, 16 2G, 17 blood metabolite levels, cardiovascular disease risk factors, metabolic and lipid traits, gout, liver enzyme levels, and chronic kidney disease. 15 Additional variants within GCKR are genome-wide significant for FI-related traits 16 and Crohn's disease. An additional missense variant rs13266634 (hg19 chr8:g.117172544C>T; in SLC30A8) is associated with both FG and 2G, and genome-wide significant for T2D, 18 FG, 16 fasting proinsulin levels, 19 and glycated haemoglobin levels. 20 These results are positive controls, since the variants were known to be genome-wide significant for the traits and our method both detects this overlap and suggests that these numbers are greater than expected by chance.
The gene TCF7L2 is known to be associated with T2D and glycaemic traits 13 and within it we identify two overlap SNPs that are in low LD (r² = 0.089) with each other: rs7903146 is detected for each pair of traits and rs7079711 is identified for FI-FG. The SNP rs7903146 acts as a positive control, since it is the lead SNP in TCF7L2 for associations with T2D, 18 FI-related traits (interaction with BMI), 5 and FG-related traits (interaction with BMI) 5 and is also genome-wide significant for 2G 16 and FG; 16 this SNP is also our top signal for the FI-2G overlap and for each of FG and 2G. A further positive control is detection of the FG-2G variant rs11708067 (in ADCY5), which is known to be associated with FG 16 and is in LD with a known 2G-associated SNP rs2877716 (r² = 0.807). 17 Each FI-2G variant that is regulatory in a metabolism-involved tissue is within a gene containing FI- or T2D-associated variants (P < 5 × 10⁻⁶).
The top FI-FG signal is rs6984305 (in RP11-115J16.1), which is regulatory in tissues from the pancreas, liver, cardiac muscle and skeletal muscle. In the MAGIC data under analysis, this SNP is genome-wide significant for FG (P-value 2.67 × 10⁻⁸; ABF 5.63) and highly significant for FI (P-value 3.36 × 10⁻⁷; ABF 4.10); rs6984305 is also in LD (r² = 0.614) with a known genome-wide significant FG (interaction with BMI)-associated SNP, rs4841132. 5 Several SNPs are of interest for further investigation, as they (and SNPs in LD with them) have not been previously identified as associated with glycaemic traits. The SNP rs4736324 (in LYPD2, which harbours variants associated with body fat distribution) is regulatory in pancreas tissue/islets and is an FG-FI variant. Likewise, rs2014712 (in KCNK9 and regulatory in liver tissue) is an FG-FI variant, and variants in KCNK9 are associated with adiponectin levels, cholesterol and CAD. Variant rs598725 (downstream of RP4-60717.1) is an FG-2G variant and is regulatory in both skeletal and cardiac muscle. Most of the overlap SNPs that are regulatory in a non-metabolism-involved tissue are not in LD with a variant that is associated (at R = 20, π0 = 0.99) with more than one glycaemic trait. The exception is rs17036328 (within PPARG), which is in perfect LD with several variants that meet significance for each of FG, FI (genome-wide level) and 2G; two of these perfect-LD variants are regulatory in cardiac and skeletal muscle.
Enrichment of variants associated with FG, FI, 2G, and FG-2G among regulatory variants in pancreatic islets concurs with the result that islets are enriched in loci that are associated with FG and T2D. 13 Among regulatory variants in liver tissue, there is enrichment of FI and FG variants, though not 2G variants, aligning with the finding that individuals with impaired FG have hepatic insulin resistance, while those with impaired glucose tolerance (as measured by 2G) have normal to slightly reduced hepatic insulin sensitivity. 21 This suggests that the liver plays a relatively more important role in influencing FG than 2G. Enrichment of FI-associated variants in liver tissue may coincide with insulin regulating glucose production in the liver during the fasting state. Enrichment of glucose trait variants in cardiac and skeletal muscle is likely linked with muscle being a target organ for insulin.
A possible limitation of the proposed approach is that the SNPs included in the analysis need to appear in both trait data sets, though imputed results are often available, so this may not have a significant impact. It is possible that, as we are limited to the set of SNPs available in both studies, an associated SNP may be a tag SNP for the causal variant, which lies in a different covariate category, so that the enrichment category does not contain this causal variant. However, for covariate categories containing a proportion of SNPs > 1%, a number of associated variants would need to fall within the category for enrichment to be detected, and it is highly unlikely that the majority of associated SNPs in the detected enrichment category are each a tag SNP for a causal variant in a different category. Therefore, even if this is true for a particular associated SNP, there is no change to the general biological interpretation of the covariate category being enriched for associated SNPs, as a set of associated SNPs has been detected in the category.
Alternative covariates to functional annotations may be trait × SNP-specific, to inform about whether overlap SNPs occur more often than expected by chance within a certain trait feature, such as previously identified trait-associated SNPs (using information obtained from NHGRI-EBI). Additional covariate possibilities include SNP presence/absence in at least one gene (+/− 50 kb buffer region) that has been identified as harbouring a trait-associated variant (P < 5 × 10⁻⁸), or a less stringent classification (5 × 10⁻⁸ < P < 5 × 10⁻⁴), to increase the chances of finding novel results.
The proposed approach may also be used for pathway-based analyses, where the covariate indicates whether or not the SNP is in a certain pathway of relevance to one of the traits. For genes in a given pathway (or group of related pathways), a covariate may be defined according to presence/absence of the variant within at least one gene (+/− 500 kb buffer) in the pathway; an additional covariate may be defined as presence/absence of the variant more than 500 kb but less than 1000 kb from a pathway gene. This pair of covariates may be used in a separate overlap model for each pathway (or pathway group) of interest.
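A minimal sketch of how this pair of pathway covariates could be derived from gene coordinates is given below; the gene intervals and SNP positions are hypothetical, and a real implementation would also need to match chromosomes.

```python
def pathway_covariates(snp_pos, genes, near=500_000, far=1_000_000):
    """Classify one SNP against a pathway's genes.

    Returns (in_buffer, in_annulus):
      in_buffer  -- SNP within `near` (500 kb) of at least one gene
      in_annulus -- SNP more than 500 kb but less than 1000 kb from the
                    closest pathway gene (the second covariate in the text)
    genes: list of (start, end) positions, same chromosome assumed.
    """
    def dist(start, end):
        if snp_pos < start:
            return start - snp_pos
        if snp_pos > end:
            return snp_pos - end
        return 0  # SNP lies inside the gene

    d = min(dist(s, e) for s, e in genes)
    return d <= near, near < d < far

# Hypothetical single-gene pathway spanning 1.00-1.05 Mb:
genes = [(1_000_000, 1_050_000)]
```

With these toy coordinates, a SNP at 1.4 Mb falls in the 500 kb buffer, one at 1.8 Mb falls in the 500 kb to 1000 kb annulus, and one at 3.0 Mb is positive for neither covariate.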
In conclusion, our proposed procedure for identifying features predictive of overlap informs biological interpretation and enables refinement of the set of variants considered in further searches for predisposing variants for both traits.
Fasudil improves endothelial dysfunction in rats exposed to chronic intermittent hypoxia through RhoA/ROCK/NFATc3 pathway
Endothelial dysfunction is one of the main pathological changes in obstructive sleep apnoea (OSA). The Rho kinase (ROCK) pathway is associated with endothelial dysfunction. However, the interaction between ROCK and nuclear factor of activated T cells isoform c3 (NFATc3) in the development of this pathological response under chronic intermittent hypoxia (CIH) is unclear. To simulate OSA, we established a moderate CIH rat model by cycling the fraction of inspired O2 (FiO2) from 21% to 9%, 20 times/h, 8 h/day for 3 weeks. Fasudil (a ROCK inhibitor, 8 mg/kg/day, i.p.) was administered to rats exposed to CIH for 3 weeks. Our results demonstrated that CIH caused significant endothelial dysfunction, accompanied by an increased ET-1 level, decreased eNOS expression and reduced NO production, which diminished ACh-induced vascular relaxation responses. Moreover, RhoA/ROCK-2/NFATc3 expression was up-regulated. Fasudil significantly improved CIH-induced endothelial dysfunction. These data suggest that ROCK activation is necessary for endothelial dysfunction during CIH.
Introduction
Obstructive sleep apnoea (OSA) is a complete or partial airway obstruction, resulting in significant physiological disturbance with multiple clinical influences [1]. The aetiology of OSA is multifactorial; clinically, patients are reported to exhibit snoring at night, headache on waking, daytime sleepiness and decreased cognitive performance [2]. Recent epidemiological studies have revealed that the prevalence of OSA is approximately 3-7% in men and 2-5% in women [3,4]. Studies have shown that OSA increases the prevalence and incidence of cardiovascular diseases [5,6], such as atherosclerosis, coronary heart disease, heart failure, arrhythmia and hypertension.
There may be many possible factors linking OSA with cardiovascular diseases; however, the specific mechanism has not been fully elucidated. Some studies have shown that endothelial dysfunction, as part of the pathogenesis of cardiovascular diseases, is significantly correlated with OSA [7]. The vascular endothelium participates in the release of multiple vasoactive factors, including the vasodilator nitric oxide (NO) and the vasoconstrictor endothelin-1 [8], which play a major role in the pathogenesis of cardiovascular problems such as atherosclerosis, systemic and pulmonary hypertension, and cardiomyopathies [9]. OSA is characterized by chronic intermittent hypoxia (CIH), and CIH can trigger systemic endothelial dysfunction, suggesting that the ability of the endothelium to regulate vascular tone and its repair capacity are weakened [10]. In rats exposed to CIH, the circulating endothelin-1 (ET-1) level and the susceptibility of vasoconstriction to ET-1 were enhanced [11,12], and vascular NO bioavailability was decreased [10]. The small GTP-binding protein RhoA and its downstream target, Rho kinase (ROCK), have recently been studied in the cardiovascular field. Activated ROCK has been associated with atherosclerosis and arterial hypertension in experimental rat models [13,14] and clinical patients [15,16]. Studies have shown that treatment with the ROCK inhibitor fasudil could reduce atherosclerotic lesions by decreasing arterial intima-media thickness and macrophage accumulation [17]. On the other hand, nuclear factor of activated T cells isoform c3 (NFATc3) belongs to the NFAT family of transcription factors, which undergo calcineurin-dependent nuclear translocation. It is important to note that activation of Rho/ROCK is involved in pathways that regulate NFAT activity [18].
Some studies have demonstrated that NFATc3 is related to pulmonary hypertension induced by CIH in mice [19,20]; however, the mechanism by which RhoA/ROCK/NFATc3 mediates CIH-induced endothelial dysfunction has not been fully clarified.
In this study, we imitated OSA using a rat model of CIH to investigate the role of ROCK, and examined whether CIH affects RhoA/ROCK/NFATc3-mediated endothelial dysfunction in aortas. We hypothesized that fasudil treatment could inhibit CIH-induced endothelial dysfunction in rats, and further investigated whether fasudil would restore endothelial function impaired by CIH and the underlying mechanisms.
Experimental animals
Ethical approval. All procedures were performed based on the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were authorized by the Animal Care and Use Committee of Medical Ethics of Hebei University of Chinese Medicine (approval number: HEBUCM-2014-07; approval date: July 01, 2014). Adult male Sprague-Dawley rats (190-220 g) were purchased from the Hebei Experimental Animal Center (Shijiazhuang, China). All rats were given free access to food and water and were housed under constant temperature and controlled illumination. All rats were allowed to adapt to their living conditions for at least 7 days before the experiment.
The test of fasudil. To assess the effect of fasudil on endothelial function, a preliminary experiment was first performed. Fasudil was purchased from Cheng Tian Heng Chuang Biological Technology Company. Twelve rats were randomly divided into a control group and a fasudil group. Rats in the fasudil group were administered fasudil (8 mg/kg/day, i.p., once a day, at 9 a.m.) for 4 weeks, while rats in the control group received an equal volume of normal saline. All rats were observed daily for general status, behaviour, morbidity and mortality. Body weight (BW), tail-cuff systolic blood pressure (SBP) and heart rate (HR) were measured at the beginning of the study and once a week thereafter. Food consumption was recorded weekly. 24 h after the last administration of the drug, both groups of rats were sacrificed. Their blood samples were used for blood biochemistry, and their aortas were dissected for histopathological analysis.
Experimental grouping and CIH model. SD rats (n = 36) were randomly divided into three groups (n = 12 per group): normoxia control group (Normoxia), CIH model group (CIH) and fasudil-treated CIH model group (CIH + Fa). These rats were housed in special hypoxic chambers with a controlled gas delivery system that monitored the flow of air, nitrogen and oxygen into the chambers. The fraction of inspired oxygen (FiO2) provided to the chambers for the CIH and CIH + Fa groups declined from 21% to 9% over 90 s, and then gradually increased back to 21% with re-oxygenation over the subsequent 90 s. The exposure cycle was repeated every 3 min for 8 h/day, for 3 weeks. In addition, the rats in the CIH + Fa group were given fasudil (8 mg/kg/day, i.p., once a day) for 3 weeks. Rats in the Normoxia and CIH groups were injected with an equal volume of normal saline at the same time points.
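The exposure protocol can be written down as a simple FiO2 time profile. The linear ramp shape is an assumption for illustration; the protocol specifies only the 21% and 9% endpoints and the two 90 s half-cycles.

```python
def fio2(t_seconds):
    """FiO2 (%) at time t within the CIH protocol: a 180 s cycle that
    ramps from 21% down to 9% over the first 90 s and back up to 21%
    over the next 90 s (20 cycles/h, applied 8 h/day).

    Linear ramps are an assumption; only the endpoints and durations
    come from the protocol description."""
    t = t_seconds % 180          # position within the 3-min cycle
    if t <= 90:
        return 21.0 - (21.0 - 9.0) * t / 90.0      # desaturation ramp
    return 9.0 + (21.0 - 9.0) * (t - 90) / 90.0    # re-oxygenation ramp
```

For example, FiO2 is 21% at the start of each cycle, reaches the 9% nadir at 90 s, and passes 15% at the midpoints of both ramps.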
Tissue and blood sample processing. After 21 days, the rats were fasted for one night. They were weighed and anaesthetized with pentobarbital (100 mg/kg, i.p.). Half of the rats in each group (n = 6, for each group) were used for the ACh-induced vascular relaxation responses study, and the other half of rats in each group (n = 6, for each group) were used for blood and tissue determination. Blood samples were obtained from the femoral aorta, and the serum was separated and collected for biochemical analysis. At the same time, thoracotomy and thoracic aorta tissues were collected. Collected thoracic aorta tissues were used for western blot analysis, nitrate reductase method detection and histological analysis.
Evaluation of vasodilator responses
The isolation of aortic vessels. The thoracic aortas were excised and immediately placed in 4 °C physiological saline solution (PSS, pH 7.4; 133.1 mM NaCl, 4.7 mM KCl, 0.61 mM MgSO4, 1.3 mM NaH2PO4, 16.7 mM NaHCO3, 2.5 mM CaCl2, 7.6 mM glucose). The thoracic aortas were carefully isolated and cut into 3 mm rings. Where required, the vessel endothelium was stripped mechanically by inserting watchmaker's forceps tips into the vascular lumen and repeatedly rolling the vessel on saline-saturated filter paper [21].
Detection of vasodilator responses. Rings of arteries were suspended horizontally in organ chambers filled with 6 ml PSS sustained at 37˚C and inflated with 95% O 2 and 5% CO 2 . Two stainless steel wires passed through the vessel ring lumen; one was fixed to the bottom of the organ chamber, and the other was attached to a strain gauge. Isometric tension was measured with a Power-Lab/8sp recording and analysis system (Model ML785; AD Instruments, Castle Hill, NSW, Australia).
Each vascular ring was progressively extended to its optimal resting tension until the contraction force of the vascular ring in 70 mM KCl reached a plateau; the optimal resting tension of rat thoracic aortas was 1.5 g. Each ring was equilibrated for 1 hour. After equilibration, viability was verified by contraction with 10 −6 M phenylephrine (PE, Sigma Chemical, St. Louis, MO), and vasodilator responses to vasodilator acetylcholine (ACh, Sigma, 10 −6 M) and sodium nitroprusside (SNP, Sigma, 10 −6 M) were tested.
Endothelium denudation was confirmed as a < 5% relaxation response to 10⁻⁶ M ACh in rings preconstricted with 10⁻⁶ M PE. To determine whether CIH affected vasodilator responses, basal aortic tone was established by pre-incubating the rings with 10⁻⁶ M PE, and relaxation responses to 10⁻⁶ M ACh and 10⁻⁶ M SNP were examined. Relaxation responses to ACh and SNP are expressed as a percentage of the PE-induced tone.
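The relaxation measure and the denudation criterion reduce to a small calculation; the tensions in the example are illustrative values, not recorded measurements.

```python
def relaxation_percent(tone_pe, tone_after_drug):
    """Relaxation expressed as a percentage of the PE-induced tone
    (tensions in grams; inputs here are illustrative)."""
    return 100.0 * (tone_pe - tone_after_drug) / tone_pe

def endothelium_denuded(tone_pe, tone_after_ach, cutoff=5.0):
    """Denudation is accepted when ACh relaxes the PE-preconstricted
    ring by less than the cutoff (< 5%, as stated in the text)."""
    return relaxation_percent(tone_pe, tone_after_ach) < cutoff
```

For instance, a ring that drops from 2.0 g of PE tone to 0.5 g after ACh shows 75% relaxation (clearly endothelium-intact), whereas a drop to 1.95 g is only 2.5% relaxation and would pass the denudation check.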
Measurement of ET-1 levels in serum
The serum ET-1 concentration was measured by radioimmunoassay (RIA) according to the instructions of the kit (SenBeiJia Biological Technology, Nanjing, China). Using the non-equilibrium method, a standard curve was constructed and the content of the samples was calculated from it.
Measurement of NO content in aortic tissue and serum
The NO content was determined by measuring total nitrate and nitrite concentrations (Nitric Oxide Assay Kit; Nanjingjiancheng Biological Engineering Institute, Nanjing, China). This assay determines the total content of NO based on the enzymatic conversion of nitrate to nitrite by nitrate reductase, followed by colourimetric detection of nitrite as an azo dye product of the Griess reaction. The absorbance of the compound at 550 nm was read with a microplate reader.
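The standard-curve step of such an assay amounts to a linear calibration of absorbance against known nitrite concentration. The standards below are hypothetical, perfectly linear toy values used only to show the calculation.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a nitrite standard
    curve (absorbance at 550 nm vs known nitrite concentration)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def concentration(absorbance, a, b):
    """Invert the standard curve to read off a sample concentration."""
    return (absorbance - b) / a

# Hypothetical standards: 0-100 umol/L nitrite vs A550 (toy data)
std_conc = [0, 25, 50, 75, 100]
std_abs = [0.02, 0.27, 0.52, 0.77, 1.02]
a, b = fit_line(std_conc, std_abs)
```

With these toy standards the fit recovers slope 0.01 and intercept 0.02, so a sample with A550 = 0.52 reads back as 50 umol/L.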
Histological analysis
The aortic tissues removed from all rats were fixed in 4% paraformaldehyde for 48 h. After fixation, the tissues were dehydrated in alcohol gradient and embedded in paraffin. Tissue slices were cut at 5 μm thickness and stained with haematoxylin and eosin (H&E) for histological analysis. Each section was observed under 10 × 40 light microscopic fields with an optical microscope (Olympus Japan Co., Tokyo, Japan).
Statistical analysis
Results are presented as the mean ± SE. For vasodilator responses studies, statistical analysis was carried out using two-way ANOVA tests followed by Bonferroni's post hoc analysis. For other data, statistical analysis was carried out using a one-way ANOVA followed by Tukey's post hoc test. The significance level was set at 0.05. All analyses were carried out using SPSS 19.0 software.
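For intuition, the F statistic of the one-way ANOVA used for the between-group comparisons can be computed directly. The data below are toy numbers, not the study's measurements, and the actual analyses (including the Tukey and Bonferroni post hoc tests) were run in SPSS.

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k groups, computed from
    between-group and within-group sums of squares (toy data only)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from group means.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

For example, three toy groups [1, 2, 3], [2, 3, 4] and [6, 7, 8] give F = 21 on (2, 6) degrees of freedom.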
Results
Fasudil has no effect on endothelial function

Rats that received fasudil (8 mg/kg, i.p.) did not show any effects on body weight, behaviour or mortality during the four-week observation period. There were no remarkable differences in related biochemical indicators, systolic blood pressure or heart rate (Table 1), and there were no obvious histological changes in the aorta in the fasudil group compared to the control group (Fig 1).
Fasudil improved histopathological change of vascular endothelium in aortas
H&E stain presented the integrity of the vascular endothelium, consisting of an unbroken endothelial monolayer with regularly shaped and arranged endothelial cells in the Normoxia group (Fig 2). However, the endothelial layer exhibited remarkable histopathological changes in the CIH group, showing cellular oedema and partial exfoliation of endothelial cells (Fig 2). The histopathological change of vascular endothelium was improved in the CIH + Fa group compared with the CIH group (Fig 2).
Fasudil improved vasodilator responses dysfunction in rats exposed to CIH
To determine whether CIH affected endothelium-dependent or endothelium-independent vasodilation in rat aorta, we examined relaxation responses to ACh and SNP in endothelium-intact and endothelium-denuded aortas from rats in the Normoxia, CIH and CIH + Fa groups. The results showed that relaxation responses induced by ACh in the endothelium-intact CIH group decreased significantly compared with the Normoxia group, and fasudil significantly attenuated this decrease in ACh-induced relaxation responses of the endothelium-intact CIH group (P < 0.05) (Fig 3A and 3C). However, ACh-induced relaxation responses showed no significant differences among the endothelium-denuded groups (Fig 3B and 3C). ACh-induced relaxation responses of all endothelium-denuded groups were significantly lower than those of the endothelium-intact groups (Fig 3C). The relaxation responses induced by SNP were not significantly different among the groups (Fig 3A, 3B and 3D).

Fasudil increased NO in serum and aortic tissue in rats exposed to CIH

NO generated by endothelial cells plays an important role in maintaining the vascular microenvironment [22]. The total NO levels in serum and the aorta were dramatically reduced in CIH rats compared with the Normoxia group (P < 0.05), whereas the level of NO significantly increased in the CIH + Fa group compared with the CIH group (P < 0.05) (Fig 4A and 4B). We also studied the effects of CIH on endothelial NO generation by measuring the marker of eNOS activity, eNOS (Ser1177) phosphorylation; phosphorylation of eNOS (Ser1177) accelerates NO production, whereas dephosphorylation decreases it [23]. As shown in Fig 4C and 4D, the levels of eNOS and p-eNOS (Ser1177) significantly decreased in CIH aortas compared with the Normoxia group; however, the levels of eNOS and p-eNOS were both increased with fasudil treatment (P < 0.05) compared to the CIH group.
Fasudil decreased ET-1 in serum and aortic tissue in rats exposed to CIH
The level of ET-1 in serum showed a marked increase in the CIH group compared with the Normoxia group (Fig 5A), and western blot results revealed that the ET-1 protein level in aortic tissue was also raised (Fig 5B). Treatment with fasudil significantly prevented the CIH-induced increases in ET-1 in the serum and aortic tissue (P < 0.05) (Fig 5A and 5B).
RhoA/ROCK/NFATc3 pathway mediated the improving effect of fasudil on CIH
Which signalling pathway, then, regulates the level of NO in the endothelium-intact aortas during CIH? We first measured the RhoA protein level in aortic tissue. Fig 6A shows that the level of RhoA was higher in the CIH group than in the Normoxia group (P < 0.05), and the level of ROCK-2 was also elevated in the CIH group (P < 0.05) (Fig 6B). Fasudil treatment decreased the expression of RhoA and ROCK-2 protein compared to the CIH group (Fig 6A and 6B).
As shown in Fig 6C, p-MYPT1/t-MYPT1 was significantly elevated in the CIH group compared with the Normoxia group (P < 0.05). However, treatment with fasudil significantly decreased p-MYPT1/t-MYPT1 compared with the CIH group (Fig 6C). NFATc3 is considered a downstream substrate of ROCK and may be involved in CIH-induced endothelial dysfunction. Results showed that the expression of NFATc3 protein increased in the CIH group compared with the Normoxia group (P < 0.05). Treatment with fasudil inhibited this increase in NFATc3 expression compared with the CIH group (Fig 7).
Discussion
OSA, a worldwide sleep-breathing disease, is known as an independent risk factor for cardiovascular diseases [24]. OSA elicits CIH, which contributes to endothelial dysfunction and cardiovascular diseases [25]. In the present study, a CIH rat model was established by simulating the OSA state. The results of our study showed that CIH-induced endothelial dysfunction was associated with increased ET-1 and decreased NO in rat aorta. Fasudil attenuated endothelial dysfunction induced by CIH through inhibiting ROCK activation. Thus, RhoA and ROCK activity played an important role in the pathogenesis of CIH by mediating a potent vasoconstrictor response. Furthermore, we demonstrated that increased RhoA/ROCK/NFATc3 pathway expression and ROCK activity were associated with a functional decrease in endothelium-dependent vasodilation in aortas, which contributes to the pathogenesis of CIH-induced endothelial dysfunction. The vascular endothelium plays an important role in the regulation of various vascular functions and homeostasis [26]. Damage and/or malfunction of the arterial endothelium may be associated with various cardiovascular diseases. Endothelial dysfunction is considered an early marker of vascular abnormalities before clinically obvious cardiovascular disease [27][28][29]. Damage to the arterial endothelium may cause abnormal release of vasoactive factors, disrupting the balance of its own regulatory system, such as up-regulating ET-1 levels and down-regulating NO production.

Fig 4. (B) NO production was measured in aortas isolated from the Normoxia, CIH and CIH + Fa groups with a Griess assay. (C) eNOS protein was measured in aortas isolated from the Normoxia, CIH and CIH + Fa groups by Western blotting. (D) p-eNOS (Ser1177) protein was measured in aortas isolated from the Normoxia, CIH and CIH + Fa groups by Western blotting. The results were expressed as the mean ± SE.
* p < 0.05, CIH group vs Normoxia group; # p < 0.05, CIH + Fa group vs CIH group (n = 6 for each group). https://doi.org/10.1371/journal.pone.0195604.g004
Fig 5. Levels of ET-1 in the serum and aorta when subjected to CIH. (A) ET-1 content was measured in serum from the Normoxia, CIH and CIH + Fa groups by radioimmunoassay. (B) ET-1 protein levels were measured in aortas isolated from the Normoxia, CIH and CIH + Fa groups by western blot. The results were expressed as the mean ± SE. * p < 0.05, CIH group vs Normoxia group; # p < 0.05, CIH + Fa group vs CIH group (n = 6 for each group).
https://doi.org/10.1371/journal.pone.0195604.g005

In the past few decades, a large body of evidence has indicated that the endothelial cell is capable of releasing vasoactive substances, and the imbalance between serum ET-1 and NO levels greatly contributes to the risk of cardiovascular diseases [30]. Previous studies have shown that vascular endothelial dysfunction occurs in the CIH model prior to the development of cardiovascular diseases, suggesting that systemic endothelial dysfunction is the starting phase of CIH-induced cardiovascular disease [31]. Our study provides direct evidence of vascular endothelial dysfunction in CIH rats: NO content was significantly lower and ET-1 levels were significantly higher in CIH rats, and these changes were improved by treatment with fasudil. The results suggest that vascular endothelial dysfunction is the earliest cardiovascular abnormality in OSA and contributes to the subsequent development or progression of OSA-related cardiovascular disease [32].
Both ACh and SNP are common vasodilators. It is well known that vasodilatation caused by ACh involves the release of endothelium-derived NO, whereas relaxation caused by SNP does not involve endothelial NO [21]. Previous studies have shown that the vascular reaction to ACh is mediated by NO released from the vascular endothelium of skeletal muscle and cerebrum in rats [32], and it has been reported that ACh-induced vasodilatation in two types of arteries is damaged under CIH [33]. A study showed that even mild OSA is accompanied by decreased endothelium-dependent vascular dilation [34]. NO, a major vasodilator synthesized in the vascular endothelium, is decreased in the plasma of patients with OSA [35,36]. It has been shown that endothelial dysfunction induced by CIH is a systemic pathological condition of the vascular endothelium, affecting not just the peripheral vasculature but also the aorta, and aortic endothelial dysfunction could promote the occurrence of cardiovascular events in OSA patients [37,38]. To evaluate whether CIH affected endothelium-dependent vasodilation, vasodilator responses to SNP and ACh were examined in the aorta. Our results showed that CIH exposure impaired endothelium-intact relaxation responses to ACh, and fasudil significantly alleviated the impaired ACh-induced relaxation responses. However, CIH exposure did not affect endothelium-intact aortic vasodilator responses to SNP. These data imply that endothelium-derived vasodilation was impaired by CIH, and further support the involvement of ROCK in CIH-induced endothelial dysfunction.
As an important biomarker of endothelial function, NO is synthesized from its precursor L-arginine by the NOS family, and eNOS mediates endothelial NO generation and release [39]. Production of NO in endothelial cells by eNOS is modulated by phosphorylation of eNOS, and eNOS Ser1177 phosphorylation leads to increased NO production. Down-regulation of vascular eNOS and reduced activation of eNOS are characteristic of vascular endothelial dysfunction [22]. NFAT may play a potential role in endothelial dysfunction and the inhibition of eNOS [40]. A previous study showed that NFATc3 might contribute to arterial remodeling associated with hypoxia-intermittent hypoxia [41], and NFAT has been identified as a novel mechanism causing endothelial dysfunction under hyperglycaemia [42]. In the present study, exposure to CIH reduced the generation of NO, inhibited eNOS protein expression, reduced eNOS activation and increased NFATc3 protein expression in rat aortas, all of which were improved by fasudil treatment. These data suggest that increased NFATc3 expression plays a role in CIH-induced endothelial dysfunction. Therefore, it is possible that ROCK is associated with the NFATc3/eNOS pathway in the CIH condition.

Fig 7. Expression of NFATc3 protein in aortas when subjected to CIH. NFATc3 protein levels were measured in aortas from Normoxia, CIH and CIH + Fa groups by Western blotting. The results were expressed as the mean ± SE. * p < 0.05, CIH group vs Normoxia group; # p < 0.05, CIH + Fa group vs CIH group (n = 6 for each group).
Studies have demonstrated that RhoA/ROCK activation plays an important role in various cardiovascular diseases [43] and acts as a convergent node in the pathogenesis of vascular diseases [44]. Inhibition of this signalling pathway could reduce the risk of adverse cardiovascular events and provides pharmacological tools for vascular studies [45][46][47][48][49]. Further evidence showed that eNOS expression and activity are regulated by RhoA/ROCK [50]: RhoA/ROCK negatively regulates eNOS (Thr495) and eNOS (Ser1177) and decreases vasodilation [26]. RhoA/ROCK decreases eNOS expression through down-regulation of eNOS mRNA stability, and decreases eNOS activity through inhibition of eNOS phosphorylation at Ser1177 via the PI3-kinase/Akt pathway and acceleration of eNOS phosphorylation at Thr495 [51]. In hypertensive profilin1 transgenic mice, activation of the RhoA/ROCK pathway significantly inhibited eNOS expression and phosphorylation (Ser1177) in the mesenteric arteries [52]. ROCK inhibitors can prolong the biological half-life of eNOS mRNA and increase eNOS expression in vascular disease. MYPT1 is a major downstream target of ROCK, and in recent studies measurement of p-MYPT/t-MYPT has been used as an indirect method for assessing ROCK activity [53]. Thus, MYPT and p-MYPT proteins were measured to indirectly determine the activation of ROCK-2 in our study. Our results showed that CIH significantly elevated p-MYPT1/t-MYPT1 and increased RhoA and ROCK protein expression. A previous study showed that ROCK inhibition prevents intermittent hypoxia-induced NFATc3 activation in mouse mesenteric arteries both in vivo and ex vivo [41]. Our results demonstrated that CIH up-regulated NFATc3 expression in rat aortic arteries in a RhoA/ROCK-dependent manner. Whether NFATc3, acting as a downstream target of RhoA/ROCK, is involved in the regulation of eNOS expression via this pathway remains unclear.
Our study showed that CIH increased RhoA/ROCK-2/NFATc3 protein expression and ROCK-2 activation, inhibited eNOS expression and phosphorylation (Ser1177), and reduced NO production. These results suggest that the RhoA/ROCK/NFATc3 pathway contributes to CIH-induced endothelial dysfunction. In the present study, inhibition of the RhoA/ROCK/NFATc3 pathway by fasudil in CIH rats increased eNOS and NO levels, decreased ET-1 levels and maintained the balance between ET-1 and NO. These data further suggest that the RhoA/ROCK/NFATc3 pathway mediates CIH-induced endothelial dysfunction in aortas (Fig 8).
Conclusions
In conclusion, this study demonstrates that CIH-induced endothelial dysfunction in OSA is mediated by a reduction of eNOS/NO through activation of the RhoA/ROCK/NFATc3 pathway. ROCK inhibition by fasudil significantly improves CIH-induced endothelial dysfunction in rats. Thus, fasudil might be a feasible therapeutic option against the progression to cardiovascular disease in OSA. | 2018-04-26T23:46:28.883Z | 2018-04-11T00:00:00.000 | {
"year": 2018,
"sha1": "408bd340c8c9d452957eed419bf9e5c708d0b82c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0195604&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "408bd340c8c9d452957eed419bf9e5c708d0b82c",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46886679 | pes2o/s2orc | v3-fos-license | Land Use Change over the Amazon Forest and Its Impact on the Local Climate
One of the most important anthropogenic influences on climate is land use change (LUC). In particular, the Amazon (AMZ) basin is a highly vulnerable area to climate change due to substantial modifications of the hydroclimatology of the region expected as a result of LUC. However, both the magnitude of these changes and the physical process underlying this scenario are still uncertain. This work aims to analyze the simulated Amazon deforestation and its impacts on local mean climate. We used the Common Land Model (CLM) version 4.5 coupled with the Regional Climate Model (RegCM4) over the Coordinated Regional Climate Downscaling Experiment (CORDEX) South America domain. We performed one simulation with the RegCM4 default land cover map (CTRL) and one simulation under a scenario of deforestation (LUC), i.e., replacing broadleaf evergreen trees with C3 grass over the Amazon basin. Both simulations were driven by ERA Interim reanalysis from 1979 to 2009. The climate change signal due to AMZ deforestation was evaluated by comparing the climatology of CTRL with LUC. Concerning the temperature, the deforested areas are about 2 °C warmer compared to the CTRL experiment, which contributes to decrease the surface pressure. Higher air temperature is associated with a decrease of the latent heat flux and an increase of the sensible heat flux over the deforested areas. AMZ deforestation induces a dipole pattern response in the precipitation over the region: a reduction over the west (about 7.9%) and an increase over the east (about 8.3%). Analyzing the water balance in the atmospheric column over the AMZ basin, the results show that under the deforestation scenario the land surface processes play an important role and drive the precipitation in the western AMZ; on the other hand, on the east side, the large scale circulation drives the precipitation change signal.
Dipole patterns under scenarios of Amazon deforestation were also found by other authors, but the precipitation decrease on the west side has never been fully explained. Using budget equations, this work highlights the physical processes that control the climate in the Amazon basin under a deforestation scenario.
Introduction
One of the most important anthropogenic influences on the climate is related to land use change (LUC). In particular, the Amazon (AMZ) basin is a highly vulnerable area to climate change due to the substantial modifications of the hydroclimatology of the region expected as a result of LUC forcing. The AMZ forest is the largest tropical rainforest on Earth [1]. It covers approximately 5.5 million km2, of which Brazil comprises 60% [2]. Deforestation (such as biomass burning and forest fragmentation), land use change and their impacts on the climate are among the main issues in this region. The Instituto Nacional de Pesquisas Espaciais (INPE) from Brazil [3] indicated that the rate of deforestation in the Brazilian Amazon between 2000 and 2009 was one of the fastest in the world, averaging 17,486 km2 per year. More recently, it was inferred for the southern AMZ that an area of 191,319 km2 underwent changes in land cover during the period 1970 to 2012 [4]. Deforested areas present a higher surface albedo when compared with areas without changes [5,6]. Moreover, under deforestation the moisture storage capacity decreases, affecting the local energy budget, i.e., sensible heating increases whereas latent heating decreases [7]. Deforestation/land use change also contributes to increased greenhouse gas emissions into the atmosphere [8][9][10].
The AMZ climate is affected by both local and external (located outside the region) moisture sources. Concerning the water balance in the atmospheric column, the total water precipitating on large continental regions is supplied by advection from the external surrounding areas and by land surface evaporation and transpiration within the region [11]. The hydroclimatic regime variability of the AMZ is affected by local climate feedbacks. Evapotranspiration (ET, [12]) plays an important role in precipitation, and it is affected as well by large scale climate patterns, such as sea surface temperature (SST) anomalies [12][13][14][15]. Eltahir, E.A.B. et al. [16] estimated that about 25% of the precipitation in the AMZ basin is provided by evaporation within the basin. According to [14], the tropical Atlantic is a remote source of humidity for the AMZ basin, with its northern sector contributing mainly during the austral summer.
One of the first studies investigating the conversion of Amazon tropical forest into grass and crops was [17]. The authors used a coupled numerical model of the global atmosphere and biosphere and found an increase in mean surface temperature and a decrease in evapotranspiration, precipitation and runoff when the forest is converted to grass. The National Center for Atmospheric Research GENESIS atmospheric general circulation model, coupled to the integrated biosphere simulator, was applied to study the combined effects of large-scale deforestation and increased CO2 concentrations on the AMZ climate [9]. Considering only deforestation, the basin-average precipitation decreases by 0.73 mm day−1 due to the general weakening of the vertical motion above the deforested area. Werth, D. et al. [18] quantified the effects of LUC in the Amazon on the local and global climate. To this end, experiments with the global Goddard Institute for Space Studies Model II were carried out, replacing rainforest with a mixture of shrubs and grassland. In these simulations, the precipitation, evapotranspiration and cloudiness were reduced, corroborating previous results [9,17]. It was also verified that Amazonian deforestation is significantly correlated with remote climate changes in several areas of the globe [18]. Brankovic, C. et al. [19] investigated how local and regional circulations are affected by changes in the surface energy and moisture balance over tropical South America. The authors carried out one-year-long ensembles with a relatively high-resolution European Centre for Medium-Range Weather Forecasts (ECMWF) model and showed that the impacts on precipitation were more concentrated in the tropical region than in the extratropics. Differently from previous authors, Salazar, L.F. et al.
[20] studied the impact of climate change on vegetation over South America considering the A2 and B1 emission scenarios from the Intergovernmental Panel on Climate Change (IPCC) and Coupled Model Intercomparison Project Phase 3 (CMIP3) projections. Their results indicated that increased air temperature may induce higher evapotranspiration in tropical regions and reduce the amount of soil water. This can trigger the replacement of tropical forests by savannas, mainly in the southeastern Amazon.
Most previous studies were conducted with global climate models. LUC studies with regional climate models (RCM) for the AMZ forest date from the year 2000. The impact of converting forest to pasture on the climate of the eastern portion of the AMZ basin was evaluated by Gandu, A.W. et al. [21] using two one-year-long simulations with a regional atmospheric modelling system (RAMS). These simulations showed that the deforestation increased the cloud cover and precipitation over upland areas, especially on slopes facing river valleys. With the reduction of the roughness length (forest to pasture), the wind speed increased near the Atlantic coast, which contributed to diminishing the local moisture flux convergence with a consequent decrease of rainfall totals in nearby regions. Four scenarios of LUC over the AMZ were analyzed by Correia, F.W.S. et al. [22]: (a) no deforestation, (b) current conditions, (c) deforestation projected for 2033 and (d) large scale deforestation. For this investigation, 13-month integrations were performed using the Eta model (named after the Greek letter) coupled to the simplified simple biosphere model (SSiB). The authors highlighted that partial deforestation can lead to a local increase in precipitation, whilst increasing deforestation can lead to drier conditions. In terms of future projections, Salazar, L.F. et al.
[23] performed simulations using a regional climate model from the Centro de Previsão de Tempo e Estudos Climáticos nested with the Potential Vegetation Model (CPTEC-PVM2.0), with different prescribed annual precipitation and temperature anomalies added to the observed climatology and different levels of the CO2 fertilization effect under emission scenario A2. These simulations indicate that: (a) tropical forests might be replaced by seasonal forests or savanna over eastern Amazonia with temperature increases of 2-3 °C, when the CO2 fertilization effect is not considered; (b) a decrease in precipitation greater than 30% may shift tropical forest to drier biomes in southeastern Amazonia. The Consortium for Small Scale Modeling-Climate Limited-area Modelling Community model (COSMO-CLM2) projected a dipole pattern in the precipitation field under an Amazon deforestation scenario, wet and dry, respectively, in the western and eastern sectors of the basin [24]. Wetter conditions over the western sector were associated with an increase of the low-level moisture transport, due to stronger winds over the deforested area, from the tropical Atlantic Ocean to the AMZ region. Moreover, a decrease in the upward vertical motion was noted in the eastern dry sector, indicating subsidence over this region.
In order to complement previous studies, we analyze how LUC (changing forest to grass) over the Amazon basin modifies the surface energy and water budgets in the atmosphere. These budgets are explored in order to explain the simulated changes in precipitation and air temperature over the Amazon due to LUC modifications.
RegCM4 Configuration and Experiment Design
The latest version of the Abdus Salam International Centre for Theoretical Physics (ICTP) regional climate model, RegCM4 [26] version 4.45, was used in this work. RegCM4 is an evolution of its previous versions [27], with many upgrades in several aspects of the model physics. A list of the available physical options in RegCM4 is given in [26]. For example, different schemes are available to represent cumulus convection, such as the schemes of Grell [28] and Emanuel [29]. The RegCM system has been used by a wide research community over the last two decades [26,27] in several applications, including process studies, paleo and future climate simulations, land-atmosphere interactions and aerosol effects (e.g., [26,27]). The current version of RegCM4 includes the Community Land Model version 4.5 (CLM4.5, [30]) as an alternative to the Biosphere-Atmosphere Transfer Scheme (BATS, [31]) to describe land surface processes. CLM4.5 presents substantial improvements over BATS in describing soil temperature and moisture transfers, vegetation and surface hydrology processes.
Water 2018, 10, 149
Spatial land surface heterogeneity in CLM4.5 is represented as a nested subgrid hierarchy in which grid cells are composed of multiple land units, snow/soil columns, and plant functional types. Details of CLM4.5 are found in the technical description of version 4.5 [31].
We performed two simulations covering the period 1979-2009, starting at 00:00 UTC on 1 January. The first simulated year (1979) was used as spin-up and, for this reason, was discarded from the analysis. The control (CTRL) experiment assumes that over the Amazon the plant functional type for CLM4.5 is broadleaf evergreen tree (hereafter tropical rain forest), with the canopy top at 35 m. In the other simulation, hereafter referred to as the LUC experiment, tropical rain forest was replaced with C3 grass with a canopy top of 0.5 m. Both simulations used the Leaf Area Index (LAI) from monthly datasets with a spatial resolution of 0.5°, as described in [31]. Topography and land use data were taken from the United States Geological Survey (USGS) and the Global Land Cover Characterization (GLCC), respectively, both with 10' horizontal resolution [32]. Tropical rain forest was replaced by grass in the GLCC data to carry out the LUC experiment, in order to assess the underlying physical processes of this change over the AMZ.
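As an illustration of how such a forest-to-grass substitution can be encoded, a minimal sketch follows. The class codes, toy arrays and `deforest` helper are hypothetical and only mimic the replacement step; the actual RegCM4/CLM4.5 surface datasets use their own formats.

```python
import numpy as np

# Hypothetical integer class codes for a land-cover grid
BROADLEAF_EVERGREEN = 2   # tropical rain forest (canopy top ~35 m)
C3_GRASS = 7              # replacement type (canopy top ~0.5 m)

def deforest(landcover, amazon_mask):
    """Replace tropical rain forest with C3 grass inside the Amazon mask,
    leaving every other grid cell unchanged (LUC experiment)."""
    luc = landcover.copy()
    luc[amazon_mask & (landcover == BROADLEAF_EVERGREEN)] = C3_GRASS
    return luc

# Toy 3 x 3 land-cover map and Amazon mask
ctrl = np.array([[2, 2, 1],
                 [2, 5, 2],
                 [1, 2, 2]])
mask = np.array([[True, True, False],
                 [True, True, False],
                 [False, False, False]])
luc = deforest(ctrl, mask)
```

Only forested cells inside the mask are converted; forest outside the mask and non-forest classes inside it are untouched, which mirrors how the CTRL map is perturbed into the LUC map.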
The green area in Figure 1a represents the tropical rain forest (CTRL experiment) that was replaced with C3 grass in the LUC experiment, whereas the horizontal black lines indicate the cross section (5° S-5° N) selected for a detailed analysis of the LUC signal over the tropical areas.
RegCM4 was integrated with a horizontal grid spacing of about 50 km and 18 sigma-pressure vertical levels over the South America (SA) CORDEX [33] domain (Figure 1b), which covers SA and the adjacent oceans, and we used a model time step of 100 s. Atmospheric variables and SST from the 1.5° × 1.5° ERA-Interim reanalysis dataset [34] provided the initial and boundary conditions for RegCM4.45. The simulations used the Emanuel scheme for cumulus convection according to [35,36], which showed that the combination of Emanuel and CLM results in smaller errors in RegCM4 simulations over SA than other convection schemes.
Analysis
The first part of the results presents the validation of the CTRL experiment (air temperature and precipitation) by comparing it with the monthly observed data from the Climate Research Unit (CRU, [37]). The air temperature and precipitation from the CRU were obtained using only observed data from surface stations over land, at 0.5° × 0.5° horizontal resolution. The precipitation and temperature change signals due to AMZ deforestation were evaluated by comparing the climatologies (1980-2009) of the LUC and CTRL experiments.
To better understand the LUC signals, the climatologies of other simulated variables were also analyzed (surface pressure, geopotential height, albedo, soil moisture, evapotranspiration, and the sensible (H), latent (LE) and soil heat fluxes (G)). Physical interpretation of the results was conducted by analyzing the surface energy budget, which determines the amount of energy available to evaporate surface water and to raise or lower the temperature [38], and can be defined as in Equation (1):

Rn = H + LE + G, (1)

where Rn is the surface net radiation. The results section shows that precipitation presents a dipole pattern over the western and eastern AMZ basin (see Figure 1b for the location of these areas) in the LUC experiment. To better understand this pattern, the water balance in the atmospheric column, i.e., the change of atmospheric water vapor storage over the western and eastern AMZ, was also calculated using the formulation presented by [39], as in Equation (2):

dw/dt = ET − P + C, (2)

where dw/dt represents the water stock change (mm day−1), P is the precipitation (mm day−1), ET is the evapotranspiration (mm day−1), and C is the vertically integrated moisture flux convergence (mm day−1) between 925 and 100 hPa. C is calculated as in Equation (3):

C = −(1/g) ∫ ∇·(qV) dp, (3)

where the integral is taken between 925 and 100 hPa, g is the acceleration due to gravity, and q and V are, respectively, the air specific humidity and the horizontal wind vector. In Equation (2), dw/dt can be ignored for periods longer than a month [39], so the water balance reduces to Equation (4):

P = ET + C. (4)

Using the surface energy and water budgets, one can discriminate whether the precipitation change signal is affected by land-atmosphere feedback (i.e., evapotranspiration), by large scale circulation patterns, such as the moisture transport from the Atlantic Ocean to the Amazon basin, or by both.
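The reduced water balance of Equation (4) can be made concrete with a short numerical sketch. The single-column profile, the assumed `ET` value and the helper below are hypothetical; the sketch assumes the standard column form dw/dt = ET − P + C with C = −(1/g)∫∇·(qV)dp, discretized with the trapezoidal rule over pressure.

```python
import numpy as np

G = 9.81  # acceleration due to gravity, m s^-2

def moisture_flux_convergence(div_qv, p):
    """Discrete C = -(1/g) * integral of div(q*V) dp, trapezoidal rule
    over pressure levels ordered top -> bottom (Pa).
    Returns C in kg m^-2 s^-1 (numerically equal to mm s^-1 of water)."""
    dp = np.diff(p)
    mid = 0.5 * (div_qv[1:] + div_qv[:-1])
    return -np.sum(mid * dp) / G

# Hypothetical single-column profile between 100 hPa and 925 hPa
p_levels = np.array([10000.0, 50000.0, 70000.0, 92500.0])  # Pa
div_qv = np.array([0.0, -0.3e-8, -0.6e-8, -0.9e-8])        # s^-1 (net convergence)

C = moisture_flux_convergence(div_qv, p_levels) * 86400.0  # -> mm day^-1
ET = 3.5                                                   # mm day^-1 (assumed)
P = ET + C                                                 # Equation (4)
```

With net convergence (negative divergence of the moisture flux), C is positive and adds to the local evapotranspiration, so P exceeds ET; with divergence the sign reverses, which is exactly the bookkeeping applied later to the western and eastern AMZ boxes.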
Validation of the Simulated Climatology in CTRL
Figure 2 shows the annual mean precipitation and air temperature at 2 m for both the CRU and the CTRL simulation. The precipitation climatology simulated in the CTRL experiment presents a dry bias over the AMZ basin, while a wet bias occurs over the west coast, the southeast of SA and northeast Brazil (Figure 2a,b). Concerning air temperature, the CTRL experiment is colder than the observations over center-north SA (Figure 2c,d). In summary, over the AMZ basin RegCM4 simulates the precipitation climatology with a dry bias and underestimates the temperature relative to the CRU analysis.
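The bias maps discussed here reduce to a difference of time-mean climatologies on a common grid. A minimal sketch, with synthetic arrays standing in for the CTRL output and the CRU analysis (the function name and toy values are illustrative only):

```python
import numpy as np

def annual_mean_bias(sim, obs):
    """Bias of a simulated climatology against observations.
    sim, obs: arrays of shape (time, lat, lon) on the same grid."""
    return sim.mean(axis=0) - obs.mean(axis=0)

# Synthetic example: 12 monthly fields on a 2 x 2 grid,
# with the simulation uniformly 1 K colder than the observations
obs = np.full((12, 2, 2), 300.0)   # observed 2 m temperature (K)
sim = obs - 1.0                    # CTRL with a uniform cold bias
bias = annual_mean_bias(sim, obs)
```

A negative bias field corresponds to the cold (or dry) bias described above; in practice the model output would first be interpolated to the 0.5° × 0.5° CRU grid before differencing.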
Effects of Deforestation
In this section, a comparison is presented between the experiments, i.e., the effect of replacing the tropical rain forest with C3 grass over the Amazon region. The LUC experiment presents a higher annual mean air temperature over northwest SA (Figure 3a). Warming at low levels of the atmosphere contributes to decreasing the surface pressure and, as a consequence, develops a thermal low (Figure 3b). This signal was also found by other deforestation studies over the AMZ, such as [24,25].
Air temperature and albedo (Figure 4a,b) increased by up to 2.5 °C and 0.1 (~10%), respectively, over the deforested area, as shown by the cross section in Figure 1a. The albedo increased because the tropical forest was replaced by C3 grass, and the increase is more accentuated between 60° and 50° W. In previous works, Culf, A.D. et al. [5] and Eltahir, E.A.B. [6] also showed that deforested areas have a higher albedo than forests. Moreover, Eltahir, E.A.B. [6] mentioned that the surface net radiation (Rn) over cleared areas is smaller than over areas with no deforestation. Figure 4c presents a lower Rn (red line) in LUC than in CTRL in the western sector of the AMZ basin, which may indicate an increase in low cloud cover that reduces the incoming solar radiation at the surface.
The physical mechanism associated with the higher air temperature in the LUC experiment is a reduction of the latent heat flux (blue line, Figure 4c) and a corresponding increase in the sensible heat flux (green line, Figure 4c). Changes in the vegetation modify photosynthesis and impact transpiration. C3 grass transpires less than tropical forest, and this changes the energy budget at the surface, i.e., less energy is used to evaporate water (decrease in the latent heat flux) and more energy is used to warm the atmosphere above the surface (increase in the sensible heat flux). Another impact of replacing forest with grass is the increase of the ground heat flux (Figure 4c, purple line), since the radiation intercepted by the canopy is reduced in the LUC experiment.
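This repartition of the surface energy budget can be summarized by the Bowen ratio (B = H/LE). The sketch below uses hypothetical annual-mean flux values only to illustrate the forest-to-grass shift from latent- to sensible-dominated fluxes, with G recovered as the residual of Rn = H + LE + G:

```python
def bowen_ratio(H, LE):
    """Bowen ratio: sensible over latent heat flux (both in W m^-2)."""
    return H / LE

def residual_ground_flux(Rn, H, LE):
    """G from the surface energy budget Rn = H + LE + G (W m^-2)."""
    return Rn - H - LE

# Hypothetical annual-mean fluxes (W m^-2)
ctrl = dict(Rn=140.0, H=35.0, LE=100.0)   # forest: evaporation-dominated
luc = dict(Rn=130.0, H=55.0, LE=65.0)     # grass: less latent, more sensible

B_ctrl = bowen_ratio(ctrl["H"], ctrl["LE"])
B_luc = bowen_ratio(luc["H"], luc["LE"])
G_ctrl = residual_ground_flux(**ctrl)
G_luc = residual_ground_flux(**luc)
```

In this illustration the Bowen ratio rises from 0.35 to about 0.85 and the residual ground heat flux doubles, the same qualitative behavior as the LUC cross sections in Figure 4c.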
The higher temperatures in the LUC experiment are also reflected in the geopotential height (Figure 4d), where the negative values at low levels characterize the thermal low already seen in the pressure field (Figure 3b). Over the western side of the AMZ basin (Figure 1b), where precipitation is reduced in the LUC experiment (Figure 5), there are positive geopotential height values from 900 hPa to 100 hPa, with a maximum near 700 hPa, which might be associated with adiabatic heating by subsidence due to deforestation [24].
Precipitation (Figure 5a) shows a dipole response, with a decrease over the western AMZ and an increase over the eastern AMZ. A dipole pattern in precipitation was also found by other simulations using RCMs to study deforestation over the Amazon basin. For example, Silva, M.E.S. et al. [25] obtained a dipole oriented from the northwest (decrease of rainfall) to the southeast (increase of rainfall). Figure 5b accounts only for convective precipitation. Over the AMZ, the total precipitation comes mainly from convective processes, but it also depends on the convection parameterization used in this study. The east-west dipole pattern in precipitation is also evident in the cross section in Figure 4g, for both the total (red line) and convective (blue line) precipitation.
Total evapotranspiration (purple line, Figure 4e) decreases in the LUC experiment. Changing tropical rain forest to C3 grass reduces the transpiration and evapotranspiration (green and red lines, Figure 4e) and increases the ground evaporation (blue line, Figure 4e). This is in agreement with the soil moisture behavior (Figure 4f), which decreases in the surface layer of 10 cm thickness (a very narrow brown layer in Figure 4f) and can be associated with the ground evaporation (blue line, Figure 4e). The deeper soil is up to 2 mm drier (brown color) at 70° W and between 44° and 40° W in the LUC experiment, which is not significant compared with the areas (green color) where the LUC experiment is wetter than the CTRL (Figure 4f). This behavior may be associated with the soil response to precipitation.
Changes in land use (from forest to grass) reduce the surface roughness and increase the thermal gradient between the tropical Atlantic Ocean and the continent, with a consequent intensification of the low-level (850 hPa) winds, mainly over the eastern AMZ (Figure 6a). Southward of 20° S, the low-level jet east of the Andes is weakened, which may reduce the moisture flux transport from the north to southeastern SA. Above the low levels of the thermal low (Figure 3b), there is an anticyclonic anomaly at 250 hPa (Figure 6b).
Water Balance: CTRL versus LUC experiments
To better understand the precipitation change signal over the AMZ, the climatology of the water balance in the atmospheric column was calculated, using Equation (4), for both boxes over the AMZ region, shown in Figure 5. Table 1 presents the mean annual (1980-2009) values of the total precipitation, evapotranspiration and the vertically integrated moisture flux convergence over the western and eastern AMZ.
Table 1. Annual mean of water balance components: precipitation (P, mm day −1), evapotranspiration (ET, mm day −1) and convergence of moisture flux (C, mm day −1), calculated for the western and eastern AMZ boxes shown in Figure 5.

Over the western AMZ, precipitation and evapotranspiration were higher in the CTRL than in the LUC experiment, and the differences were 0.58 and 0.50 mm day −1, respectively. Although LUC presented a slightly higher convergence of moisture flux (−2.40 mm day −1) than CTRL (−2.20 mm day −1), it was not enough to oppose the reduced evapotranspiration. In this sense, the CTRL has 0.30 mm day −1 more moisture in the atmosphere than LUC, explaining why on this side of the basin the simulated precipitation decreased under the deforestation scenario. Therefore, in this region the land-atmosphere coupling plays an important role in the control of the convective precipitation.
On the other hand, over the eastern AMZ there was an increase in moisture flux convergence in the LUC (1.10 mm day −1 ) compared to the CTRL, which was higher than the reduction in evapotranspiration (0.60 mm day −1 ).This excess of 0.5 mm day −1 of moisture in LUC may justify the larger rate of precipitation in the eastern AMZ.The latter result is driven by the convergence of moisture flux related to the intensification of winds at low levels (Figure 6a) transporting moisture from the tropical Atlantic Ocean to the continent.
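The budget arithmetic behind these two paragraphs can be checked directly from the quoted differences (all values in mm day −1; the variable names are ours, and signs follow the text's convention that evapotranspiration adds moisture to the column):

```python
# Western AMZ: CTRL evapotranspiration exceeds LUC by 0.50 mm/day, while
# LUC's moisture-flux convergence exceeds CTRL's by 0.20 (-2.40 vs. -2.20).
west_et_advantage_ctrl = 0.50
west_conv_advantage_luc = abs(-2.40) - abs(-2.20)
west_net_ctrl_surplus = west_et_advantage_ctrl - west_conv_advantage_luc
print(west_net_ctrl_surplus)  # ~0.30 mm/day more moisture in CTRL

# Eastern AMZ: LUC's convergence gain (1.10) outweighs its ET reduction (0.60).
east_luc_surplus = 1.10 - 0.60
print(east_luc_surplus)  # ~0.50 mm/day more moisture in LUC
```

The two net surpluses reproduce the 0.30 mm day −1 CTRL excess in the west and the 0.5 mm day −1 LUC excess in the east quoted in the text.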
In summary, over the western AMZ and under a deforestation scenario, there is a strong feedback between land and atmosphere which impacts the precipitation (decrease of rainfall), while large-scale climate patterns (moisture flux convergence) drive the precipitation over the eastern side of the AMZ. The values shown in Table 1 may be smaller/larger, given that the CTRL simulation presents some biases, as discussed in Section 3.1.
Conclusions
The Amazon basin is an area highly vulnerable to climate change due to the modifications of the regional hydroclimatology expected as a result of LUC. In order to understand how the change of broadleaf evergreen trees (tropical rain forest) to C3 grass over the AMZ impacts the water and energy budgets, two simulations with RegCM4, for the period 1979 to 2009, were analyzed. The climate change signal due to AMZ deforestation was evaluated by comparing the climatology of the CTRL with the LUC (land use change) experiment.
Numerical experiments indicate that AMZ deforestation is associated with an increase of about 2 • C in air temperature and sensible heat fluxes, a decrease in latent heat flux, and a precipitation dipole pattern over tropical SA.
As a result of the change of the AMZ forest to grass, transpiration is reduced and hence less rainfall over the whole region would be expected. However, the LUC experiment showed a dipole pattern of dry and wet conditions in the western and eastern Amazon basin, respectively. The water balance in the atmospheric column in the western AMZ presented higher values of evapotranspiration in the CTRL than in the LUC experiment, explaining why precipitation is higher in the CTRL. This shows that land-atmosphere coupling is important in controlling the rainfall on the western side of the basin. On the eastern side, the higher moisture flux convergence, due to the intensification of northeasterly winds, is the main feature explaining the higher rainfall amount in the LUC experiment.
Under the deforestation scenario in the AMZ, the simulated dipole pattern in rainfall was driven by land-atmosphere feedback (evapotranspiration) on the western side and by large-scale feedback (convergence of moisture flux) on the eastern side of the basin. These results are in agreement with an observational study [11] showing that the total amount of water that precipitates on large continental regions is supplied by local (evapotranspiration) and remote areas (advection from the surrounding areas). This work contributes to a better understanding of the effect of a deforestation scenario on the climate over the Amazon basin. In future work, it will be important to use a larger number of ensemble members or different RCMs to better address the uncertainty in the simulated climate related to Amazon deforestation.
Figure 1. (a) Green areas represent the plant functional type of tropical rain forest that was replaced by C3 grass in the LUC simulation, and the two horizontal black lines represent the cross section (5° S-5° N) selected for deeper analysis; (b) South America simulation domain and topography (m). Rectangles in (b) indicate the western (left) and eastern (right) sides of the AMZ.
Figure 3. (a) Air temperature change, in °C, and (b) surface pressure change, in hPa, for the LUC minus CTRL simulation.
Figure 4. Cross section (5° S-5° N): (a) change in air temperature (°C); (b) change in albedo; (c) change in the energy budget components (W m −2): net radiation (red line), sensible heat flux (green line), latent heat flux (blue line) and soil heat flux (purple line); (d) change in geopotential height (m); (e) change in the evapotranspiration components (mm day −1): ground evaporation in blue, canopy transpiration in red, canopy evaporation in green and total evapotranspiration in purple; (f) change in soil moisture (mm); and (g) change in precipitation (mm day −1): total precipitation in red and convective precipitation in blue.
Figure 5. Precipitation change (LUC minus CTRL simulation) in mm day −1 for (a) total precipitation and (b) convective precipitation. Rectangles indicate the western (left) and eastern (right) sides of the Amazon.
Clustering-Driven DGS-Based Micro-Doppler Feature Extraction for Automatic Dynamic Hand Gesture Recognition
We propose in this work a dynamic group sparsity (DGS) based time-frequency feature extraction method for dynamic hand gesture recognition (HGR) using millimeter-wave radar sensors. Micro-Doppler signatures of hand gestures show both sparse and structured characteristics in the time-frequency domain, but previous studies focused only on sparsity. In this work, we first introduce a structured prior when modeling the micro-Doppler signatures, in order to further enhance the features of hand gestures. The time-frequency distributions of dynamic hand gestures are first modeled using a dynamic group sparse model. A DGS-Subspace Pursuit (DGS-SP) algorithm is then utilized to extract the corresponding features. Finally, a support vector machine (SVM) classifier is employed to realize dynamic HGR based on the extracted group sparse micro-Doppler features. The experiments show that the proposed method achieved a 3.3% recognition accuracy improvement over the sparsity-based method and a better recognition accuracy than a CNN-based method on a small dataset.
Introduction
Over the last decade, dynamic hand gesture recognition (HGR) has received increasing research interest for human-machine interaction (HMI). It possesses great significance in a number of short-range contactless applications [1-6]. Getting rid of the physical contact sensors required in traditional electromyography (EMG)-based or glove-based HGR tasks [7-9] brings many benefits, including accessibility to all potential users (healthy persons, patients with limited mobility, those allergic to contact sensors), convenience of long-term monitoring (enabling automatic stop-and-go detection), and mobility and flexibility of deployment (adaptation to all environmental and lighting conditions).
Various schemes have been proposed for non-contact dynamic hand gesture recognition, such as optical sensors [10-12], acoustic sensors [13], Wi-Fi [14,15], and radar-based methods [16]. Radar-based HGR has attracted considerable attention and tremendous progress has been made, since it can work in all lighting conditions, even in penetrating conditions, and in a privacy-preserving manner. The micro-Doppler features extracted from the spectrograms obtained using short-time Fourier transform (STFT) analysis are often utilized to characterize different hand gestures. In addition, some studies used wideband radar or multi-antenna radar systems to obtain distance or angle information [17-19]. Introducing more perspectives improves the ability to recognize gestures in certain scenarios, such as using angle information to distinguish between flapping the hand to the left and right or rotating the hand clockwise and counterclockwise. The radar-based HGR task differs from arm motion recognition [20,21], which has also grown rapidly in recent years. Arm motions are performed with the joint participation of the upper arm, the lower arm and the palm. With the involvement of the upper arm, a quite pronounced time-frequency distribution can be achieved due to the wide motion spreading range, rapid velocity changes, and large radar cross section (RCS) of the upper arm. Unlike arm motions, hand gestures involve the motions of the fingers, the palm and the lower arm. The motion expansion, the speed and the RCS of hand gestures are much smaller than those of arm motions. As a result, the motion features of hand gestures are strongly attenuated and degenerated, which makes the recognition much more difficult than that of arm motions. Thus, there is an urgent need to investigate more effective ways of conducting feature extraction and enhancement for hand gesture recognition.
Lower-dimensional features are attractive for recognition because only a small amount of data is needed for the classifier, in comparison with methods based on neural networks. In refs. [22-25], various handcrafted features are extracted from time-frequency maps and used for HGR. Eigenspace features are also commonly used for HGR [26]. In ref. [3], application-specific features extracted using principal component analysis (PCA) were utilized to recognize dynamic hand gestures. The sparse reconstruction-based feature extraction approach has also achieved good performance and proven to be effective for the gesture recognition task [27-29]. However, this approach only considered the fully sparse property. In fact, the micro-Doppler signatures of hand gestures exhibit a more important feature, that is, local clustering.
In this paper, with the aim of further enhancing hand motion features and improving recognition performance, we propose a novel strategy that jointly considers the sparsity and clustering properties of the micro-Doppler signatures and uses a dynamic group sparsity (DGS) model [30] to extract the corresponding features. Firstly, the relationship between the radar echoes of hand gestures and their corresponding micro-Doppler signatures is established using a time-frequency dictionary. Secondly, the micro-Doppler features are modeled using structured priors and extracted using the DGS-Subspace Pursuit (DGS-SP) algorithm [30]. Then, the features are fed into an SVM classifier. Finally, experiments with data collected by a 24 GHz continuous wave (CW) radar are carried out to verify the efficacy of the proposed method. The results demonstrate that the structured feature is beneficial for improving the accuracy of dynamic hand gesture recognition.
The remainder of this paper is organized as follows. In Section 2, the DGS-based structured sparse model and the DGS-SP-based feature extraction algorithm are detailed. In Section 3, the dynamic hand gesture experiments are implemented, and the recognition accuracy is presented to verify the effectiveness of the proposed method in comparison with the sparse only method and the convolutional neural network (CNN) method. Section 4 summarizes the paper.
Micro-Doppler Signatures of Dynamic Hand Gesture
Time-frequency analysis is the most common approach for conducting motion recognition tasks. Usually, the short-time Fourier transform (STFT) is applied to process the radar records so as to obtain the time-frequency representation,

S(n, k) = ∑_{l=0}^{L−1} s(n + l) h(l) e^{−j2πkl/K}, (1)

where s(·) represents the demodulated echo data, n = 0, · · · , N − 1 denotes the time index, k = 0, · · · , K − 1 is the discrete frequency index, and h(·) is a Hanning window with length L. An example spectrogram of flipping fingers is illustrated in Figure 1. The length of the Hanning window is set to 64 (0.064 s), the overlap of two consecutive windows is 63, and K is set to 256.
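As a concrete sketch, the spectrogram computation described above (Hanning window of length 64, 63-sample overlap, i.e., a hop of 1 sample, K = 256) can be written with NumPy. The input here is a synthetic chirp standing in for a demodulated radar echo; function and variable names are ours.

```python
import numpy as np

def stft_spectrogram(s, L=64, K=256, hop=1):
    """STFT magnitude with a Hanning window of length L.

    hop = 1 corresponds to the 63-sample overlap used in the text.
    Returns an (n_frames, K) array of spectral magnitudes.
    """
    h = np.hanning(L)
    n_frames = (len(s) - L) // hop + 1
    S = np.empty((n_frames, K), dtype=complex)
    for i in range(n_frames):
        frame = s[i * hop: i * hop + L] * h
        S[i] = np.fft.fft(frame, n=K)  # zero-padded to K frequency bins
    return np.abs(S)

# Synthetic 1 s echo sampled at 1 kHz (the radar's sampling rate):
# a complex chirp whose frequency sweeps upward over time.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
s = np.exp(1j * 2 * np.pi * (50 * t + 40 * t ** 2))

spec = stft_spectrogram(s, L=64, K=256)
print(spec.shape)  # one row per window position, 256 bins each
```

The resulting matrix is what the later sections visualize as the raw spectrogram.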
It can be clearly observed that most parts of the spectrogram are populated by background noise with relatively weak energy. Only in several localized and concentrated regions does the spectrogram possess stronger energy. This observation in fact states that the time-frequency distribution of dynamic hand gestures presents not merely completely sparse features, as discussed in ref. [27]. More precisely, it exhibits obvious local clustering characteristics, which can be more faithfully approximated by structured sparsity models.
It has been well studied and proven that sparsity-based micro-Doppler feature extraction methods can be of great benefit for the HGR task in limited-dataset scenarios [27,28]. However, no related work reported so far has considered the clustering nature of the spectrograms of dynamic hand gestures.
Sparsity Model of Dynamic Hand Gesture
Firstly, we present the definition of a K-sparse signal. If a signal x ∈ C^M can be approximated by K ≪ M non-zero coefficients under a certain transformation, the signal is called a K-sparse signal. Compressed sensing (CS) theory states that if the signal is sparse in a certain domain, the original signal can be accurately recovered using sparse reconstruction techniques with a reduced observation [25].
Under the complete sparsity hypothesis, by denoting the raw radar echoes of dynamic hand gestures as y ∈ C^N, the following sparse representation of y in the time-frequency domain holds,

y = Φx, (2)

where Φ ∈ C^{N×M} represents a time-frequency dictionary and x ∈ C^M is a sparse vector. The above model states that the radar echo of a dynamic hand gesture can be approximated by a linear superposition of a series of basis signals, which can have various forms. This paper adopts the Gaussian-windowed Fourier basis signal [31], which can be expressed as,

φ_m(n) = e^{−(n − t_m)^2/σ^2} e^{j f_m n}, (3)

where t_m and f_m stand for the time and frequency shift of the basis signal, respectively, and σ denotes the variance of the Gaussian window, namely, the scaling factor. And n = 1, · · · , N is the time shift index, while m = 1, · · · , M denotes the frequency shift index. The parameter σ is usually selected based on experience; here, we set it to 16. The values of t_m and f_m are empirically set to {0.25σ, 0.5σ, 0.75σ, · · · , 0.25σ × ⌊N/(0.25σ)⌋} and {π/(4σ), 2π/(4σ), 3π/(4σ), · · · , 2π}, respectively, where ⌊·⌋ denotes a rounding-down operation [31]. According to the theory of CS [32,33], for a K-sparse signal x, if K ≪ N < M holds, the sparse time-frequency distribution x in Equation (2) can be recovered by,

x̂ = argmin_x ‖x‖_0 subject to ‖y − Φx‖_2 ≤ ε, (4)

where ‖·‖_0 and ‖·‖_2 denote the L0 and L2 norms, respectively. Equation (4) can be effectively solved using many algorithms [31,34,35]. Once the sparse coefficient is obtained, the raw radar echo of dynamic hand gestures can be approximated as follows,

ŷ = Φx̂. (5)

The time-frequency distribution of the reconstructed radar echo ŷ is shown in Figure 2, with K set to 24. Compared with the raw spectrogram in Figure 1, it is obvious that the position and strength of the dominant time-frequency components are well preserved and highlighted. However, the noise is not removed perfectly. The reason is that the complete sparsity assumption is valid not only for the time-frequency signatures of the hand gesture, but also for the noise components.
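Equation (4) is typically attacked with greedy pursuit algorithms such as OMP. Below is a minimal orthogonal matching pursuit sketch on a random unit-norm dictionary; this is for illustration only, standing in for the Gaussian-windowed Fourier dictionary described above, and the names are ours.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: recover a K-sparse x with y ~= Phi @ x."""
    M = Phi.shape[1]
    support, residual = [], y.copy()
    x = np.zeros(M)
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(M)
        x[support] = coef
        residual = y - Phi @ x
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((128, 512))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm atoms
x_true = np.zeros(512)
x_true[[10, 200, 400]] = [1.5, -2.0, 0.8]   # 3-sparse ground truth
y = Phi @ x_true

x_hat = omp(Phi, y, K=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

In the noiseless, well-conditioned setting above, OMP recovers the support and coefficients exactly; on real radar echoes the recovery is only approximate, which is precisely the noise-leakage issue discussed next.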
Therefore, it is unavoidable that part of the noise components would be recovered as well in the reconstructed spectrogram.
Dynamic Group Sparsity Model of Dynamic Hand Gesture
In fact, the time-frequency distribution of a dynamic hand gesture shows an obvious clustering property, while the noise tends to spread arbitrarily throughout the spectrogram. Meanwhile, the pattern of the clusters is not limited to any specific structure. Thus, we remodel the time-frequency distribution of dynamic hand gestures using a dynamic group sparsity model with more flexibility, in which an element surrounded by non-zero elements has a higher probability of being non-zero, and vice versa.
The dynamic group sparsity signal can be defined as follows: if a signal x ∈ C^M can be approximated by K ≪ M non-zero coefficients under some linear transform, and these K non-zero coefficients are clustered into q ∈ {1, 2, · · · , K} groups, the signal is called a dynamic G_{K,q}-sparse signal [30]. In this work, the group sparse representation of y in the time-frequency domain is expressed as follows,

y = Φx_{K,q}, (6)

where x_{K,q} ∈ C^M is a group sparse vector. An effective algorithm, called Dynamic Group Sparsity-Subspace Pursuit (DGS-SP) [30], can be used to recover the above G_{K,q}-sparse signal, which is expressed as,

x̂_{K,q} = argmin_x ‖y − Φx‖_2 subject to x being dynamically G_{K,q}-sparse, (7)

where β denotes the weights of the neighbors used when enforcing the group structure. The most important feature of the DGS-SP algorithm is the introduction of a unique pruning process in the iteration, as described in Algorithm 1 below. Firstly, for the vector v to be pruned, the neighbor indices of each element are calculated. Then the neighbors are weighted and summed according to the weight coefficients β, and the result is recorded as z. The first K maximum values of z are taken as the pruned vector. After that, embedding the DGS pruning into the Subspace Pursuit (SP) algorithm results in the so-called DGS-SP algorithm. The pseudocodes of the DGS pruning and the DGS-SP algorithm are described in Algorithms 1 and 2, respectively. Different neighboring structures, namely the structured priors, can be adopted when conducting the pruning, as shown in Figure 3 [36,37], which comes with different recognition performance. This will be detailed in Section 4.
Algorithm 1: DGS pruning.
Input: signal v ∈ R^M, sparsity K, weights for neighbors β.
Output: solution support supp{v, K}.
Steps:
(1) compute the index matrix N_x ∈ R^{M×I} of the corresponding neighbors, where I is equal to the number of non-zero elements in β;
(2) compute the weights w = [β_1, β_2, · · · , β_I];
(3) for i = 1, · · · , M: z_i = |v_i|^2 + ∑_{j=1}^{I} w_j |v_{N_x(i,j)}|^2; end for
(4) let supp{v, K} be the indices corresponding to the first K maximum values in z.
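The pruning step of Algorithm 1 can be sketched in Python for a simple 1-D left/right neighborhood (the 2-D neighborhood structures of Figure 3 generalize this). The squared-magnitude weighting follows the DGS formulation of ref. [30]; variable names are ours.

```python
import numpy as np

def dgs_prune(v, K, beta=(0.5, 0.5)):
    """DGS pruning with left/right neighbors weighted by beta.

    z_i = |v_i|^2 + beta_L * |v_{i-1}|^2 + beta_R * |v_{i+1}|^2 ;
    returns the (sorted) indices of the K largest z values.
    """
    mag2 = np.abs(v) ** 2
    left = np.r_[0.0, mag2[:-1]]   # |v_{i-1}|^2, zero-padded at the edge
    right = np.r_[mag2[1:], 0.0]   # |v_{i+1}|^2, zero-padded at the edge
    z = mag2 + beta[0] * left + beta[1] * right
    return np.sort(np.argpartition(z, -K)[-K:])

# A clustered group vs. an isolated "noise" spike of comparable magnitude.
v = np.zeros(40)
v[[10, 11, 12, 13]] = [1.0, 2.0, 2.0, 1.0]   # one contiguous group
v[30] = 1.4                                   # isolated spike
print(dgs_prune(v, K=4))  # -> [10 11 12 13]
```

Note that plain magnitude-based pruning would have kept index 30 (|v_30| = 1.4 > |v_10| = 1.0), whereas the neighbor-weighted score keeps the whole cluster and discards the isolated spike — exactly the behavior that suppresses scattered noise in the spectrogram.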
The time-frequency distribution of the reconstructed radar echo ŷ using the DGS-SP algorithm is shown in Figure 4. The group sparsity level is set to 24, which is consistent with the level in Section 2.2. The third structure, as shown in Figure 3c, is selected as the neighboring structure. Compared to the results in Figure 2, it is obvious that the micro-Doppler signatures of flipping fingers are well recovered while the noise components are significantly suppressed, which indicates that the proposed approach possesses better noise-isolation performance than the OMP algorithm.
Feature Extraction of Dynamic Hand Gesture
It was detailed in Section 2.2 that the time-frequency distribution of the hand gesture echo y can be approximated by a group of basis signals with parameter sets (|x_ik|, t_ik, f_ik), where |x_ik| denotes the intensity of the specific time-frequency cell (t_ik, f_ik). Thus, these sets in effect serve as representative features directly related to the data content of different hand gestures. By extracting such discrete parameter sets of the pre-designated K-sparsity signal, the feature vectors can be formulated as below and utilized for subsequent hand gesture recognition,

f(y) = (t_i1, · · · , t_iK, f_i1, · · · , f_iK, |x_i1|, · · · , |x_iK|). (8)

Then, we turn to the recovered spectrograms for a visually intuitive comparison of the distribution of the features extracted using the proposed method and the OMP-based method. In Figure 5, the white triangles in the spectrogram represent the time-frequency locations (t_ik, f_ik) of the extracted feature vector. As can be observed from the results, the features selected by the DGS-SP algorithm are more focused around the major micro-Doppler signatures than those of the OMP method. Since the major micro-Doppler signatures contribute to the discrimination of different hand gestures, this implies that the feature vectors extracted using the proposed approach can achieve better hand gesture recognition performance.
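Assembling the feature vector f(y) above from the K recovered atoms amounts to concatenating the time shifts, frequency shifts, and magnitudes of the dominant coefficients. A toy sketch (helper name and values are ours):

```python
import numpy as np

def gesture_feature_vector(x_hat, t_shift, f_shift, K):
    """Build f(y) = (t_i1..t_iK, f_i1..f_iK, |x_i1|..|x_iK|) from the
    K strongest recovered coefficients and the dictionary parameters."""
    idx = np.argsort(np.abs(x_hat))[-K:][::-1]  # K dominant atoms, strongest first
    return np.concatenate([t_shift[idx], f_shift[idx], np.abs(x_hat[idx])])

# Toy dictionary of 6 atoms; keep the K = 2 strongest coefficients.
x_hat = np.array([0.0, 3.0, 0.1, -2.0, 0.0, 0.5])
t_shift = np.arange(6) * 0.25          # atom time shifts
f_shift = np.arange(6) * np.pi / 4     # atom frequency shifts
fv = gesture_feature_vector(x_hat, t_shift, f_shift, K=2)
print(fv.shape)  # (6,) — i.e., 3*K entries
```

With the paper's K = 24, each gesture record yields a 72-dimensional vector, which is what the classifier consumes next.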
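As a minimal sketch of the feature-vector construction described above (assuming the recovery step has already produced a coefficient matrix over the time-frequency grid; the matrix layout and helper name are illustrative, not from the paper), the 3K-dimensional vector can be assembled from the K strongest cells as follows:

```python
import numpy as np

def extract_feature_vector(X, K):
    """Build the feature vector (t_i1..t_iK, f_i1..f_iK, |x_i1|..|x_iK|)
    from the K largest-magnitude cells of a recovered time-frequency
    matrix X. Rows index time bins, columns index frequency bins (an
    assumption of this sketch; the DGS-SP recovery itself is not shown)."""
    mag = np.abs(X)
    # flat indices of the K strongest time-frequency cells, strongest first
    flat_idx = np.argsort(mag, axis=None)[::-1][:K]
    t_idx, f_idx = np.unravel_index(flat_idx, X.shape)
    return np.concatenate([t_idx, f_idx, mag[t_idx, f_idx]])
```

A classifier then consumes these fixed-length vectors directly, one per gesture record.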
Data Collection and Feature Extraction
The dataset of dynamic hand gestures utilized in this paper was collected using a software defined radar, SDR-KIT 2400T2R4, developed by Ancortek Inc., USA [38]. The platform is composed of two transmitters and four receivers. It can work either in frequency modulated continuous wave (FMCW) mode, with the operating frequency ranging from 24 GHz to 26 GHz, or in single-tone CW mode with the frequency fixed at any intermediate value. In this work, we used a laptop connected to the Ancortek millimeter wave radar to record and process the radar echoes of hand gestures, and the Python programming language was employed to implement all related signal processing methods.
In the experiment, we use only one transceiving antenna pair to collect the scattered data. The data acquisition setup and the schematic diagrams of four dynamic hand gestures are illustrated in Figure 6. The radar system operates at 24 GHz with a sampling frequency of 1 kHz. The separation between the antenna front and the human hand is about 0.3 m. In total, four types of hand gestures are considered: (a) snapping fingers, (b) flipping fingers, (c) clenching the hand, and (d) clicking fingers. Four human subjects were recruited to conduct the experiment, and each of them repeated the four gestures 25 times. Each radar recording lasts 15 s, so a complete dynamic hand gesture cycle is about 0.6 s. In this way, we obtained 400 records of the four hand gestures, 100 for each. We then used the same parameters as introduced in Section 2.1 to calculate the spectrograms of the four types of dynamic hand gestures; the results are shown in Figure 7. The reconstructed spectrograms and the feature vectors obtained using the OMP method and the proposed method, with the same parameters described in Section 2, are shown in Figure 8. The results show that the DGS-SP method, by modeling the clustering characteristics of the hand gesture spectrograms with a structured prior, performs better at extracting key hand motion information and suppressing noise in the spectrograms.
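For illustration, a spectrogram of one ~0.6 s echo segment can be computed with a short-time Fourier transform. The window length and overlap below are assumptions, since the Section 2.1 parameters are not restated here, and the chirp-like test signal merely stands in for a real hand gesture echo:

```python
import numpy as np

def stft_spectrogram(x, fs, nperseg=64, noverlap=48):
    """Minimal STFT magnitude spectrogram with a Hann window.
    nperseg/noverlap are assumed values, not the paper's settings."""
    step = nperseg - noverlap
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win
              for i in range(0, len(x) - nperseg + 1, step)]
    S = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    times = (np.arange(len(frames)) * step + nperseg / 2) / fs
    return freqs, times, S.T   # rows: frequency bins, columns: time bins

fs = 1000                                   # 1 kHz sampling, as stated
t = np.arange(0, 0.6, 1 / fs)               # one ~0.6 s gesture cycle
sig = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))  # toy micro-Doppler-like chirp
f, tt, S = stft_spectrogram(sig, fs)
```

The resulting matrix S is the time-frequency grid on which the sparse recovery and feature extraction operate.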
Recognition Accuracy Using Different Classifiers
Next, we consider the recognition accuracy. Each record of a dynamic hand gesture was processed using the proposed method at each sparsity level. By repeating this process for all the collected data samples, a dataset of feature vectors of the different dynamic hand gestures at different sparsity levels was constructed. Several traditional machine learning classifiers were then separately employed for recognition, including decision trees, naïve Bayes classifiers, the support vector machine (SVM) and K-nearest neighbors (KNN). Eighty percent of the dataset is selected as the training set, and the rest is used as the testing set. One hundred Monte Carlo trials are performed to produce an average recognition accuracy for each sparsity level. The mean recognition accuracies for the four types of hand gestures obtained using the different classifiers under various sparsity levels are depicted in Figure 9. Clearly, SVM performs best for all four types of hand gestures and is thus chosen as our classifier. This choice means that the maximum inter-class distance is obtained with this classifier, making it more competent for the hand gesture recognition task than the other classifiers. The highest recognition accuracy reached 91.3% with the sparsity level set to 48.
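The evaluation protocol above (80/20 split, accuracy averaged over repeated random splits) can be sketched with scikit-learn's SVC. The Gaussian-blob features below are placeholders for the real DGS-SP feature vectors, and only 10 trials are run instead of the paper's 100:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the 400 feature vectors (100 per gesture class);
# the real vectors come from the DGS-SP extraction step.
n_per_class, n_feats = 100, 3 * 48          # 3K features at sparsity level K = 48
X = np.vstack([rng.normal(loc=c, size=(n_per_class, n_feats)) for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

accs = []
for trial in range(10):                     # the paper uses 100 Monte Carlo trials
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=trial)
    clf = SVC().fit(Xtr, ytr)               # default RBF kernel
    accs.append(clf.score(Xte, yte))
mean_acc = float(np.mean(accs))
```

The same loop, repeated per sparsity level and per classifier, produces curves like those in Figure 9.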
Recognition Accuracy under Different Dynamic Group Structures
The neighboring structure plays a key role in the DGS-SP method and is also what distinguishes it from the OMP method. Different neighboring structures result in different reconstructions, which ultimately affect the classification performance.
The results corresponding to the different group structures (see Figure 3) are illustrated in Figure 10. Among the four neighboring structures, the type (c) structure yields the highest overall recognition accuracy. The underlying rationale lies in the fact that the time-frequency distributions of hand gestures exhibit a more evident vertical expansion than lateral expansion, owing to the high sensing frequency and the short duration of the hand gestures. Therefore, it is more desirable to apply a neighboring structure with prominent lateral expansion to preserve the information related to the time dimension. Thus, group structure (c) is suggested for modeling the clustering nature of the spectrograms of dynamic hand gestures.
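To make the role of the neighboring structure concrete, the following sketch scores each time-frequency cell by its own magnitude plus the energy of its neighbors under a chosen offset pattern. The default time-axis offsets are an assumption standing in for the paper's structure (c), and the one-shot greedy selection is only a caricature of a DGS-SP iteration, not the full algorithm:

```python
import numpy as np

def grouped_support(mag, K, offsets=((-1, 0), (1, 0))):
    """Select K cells by magnitude reinforced with neighbor energy.
    Rows of `mag` index time bins, columns index frequency bins; each
    (dt, df) offset adds the energy of the neighbor at (t+dt, f+df)."""
    score = mag.copy()
    for dt, df in offsets:
        # np.roll(a, -s)[i] == a[i + s], so this adds mag[t+dt, f+df]
        score += np.roll(np.roll(mag, -dt, axis=0), -df, axis=1)
    flat = np.argsort(score, axis=None)[::-1][:K]
    t_idx, f_idx = np.unravel_index(flat, mag.shape)
    return set(zip(t_idx.tolist(), f_idx.tolist()))
```

With such scoring, a pair of clustered cells outranks an isolated cell of slightly larger magnitude, which is exactly the behavior that suppresses isolated noise spikes in the spectrogram.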
Comparison with the OMP Method
The recognition performance of the proposed approach is given in this subsection, in comparison with the OMP-based method [27]. For a more intuitive illustration, the reconstructed spectrograms of snapping fingers under different sparsity levels using the DGS-SP approach and the OMP approach are first shown in Figure 11, with the coordinates of the selected feature vectors highlighted by white triangles in the spectrograms.
Then, a quantitative comparison, evaluating the recognition accuracy when using 40% and 80% of the data as training sets, is given in Figure 12. The proposed method outperforms the OMP-based method when the sparsity level is larger than about 25, meaning that the proposed method is more robust with respect to the sparsity level. The proposed method yields its highest recognition accuracy of 91.1% (mean value over the four gestures) when the sparsity level is set to 48, while the corresponding value for OMP is 87.8% with the sparsity level set to 8.
Clearly, the proposed approach has the advantages of better anti-noise ability and flexibility in adapting to the structures of the spectrograms, and thus outperforms the traditional complete-sparsity approach. The corresponding confusion matrix for the DGS-SP approach in this scenario is given in Table 1.
Comparison with the CNN Method
Finally, the recognition performance of the proposed method, the OMP-based method and the convolutional neural network (CNN) method, another widely adopted approach [39] illustrated in Figure 13, is analyzed with different sizes of training dataset. The size of the training data varies from 10% to 90% in steps of 10%. The sparsity level is set to 48 for the proposed method and to 8 for OMP (the best parameter for each, as shown in Figure 12). The results are shown in Figure 14. Note that the recognition performance of the proposed method is better than that of OMP. Moreover, in the small-dataset case, the proposed method achieves higher recognition accuracy than the CNN method.
Conclusions
In this paper, we investigated and exploited four structured priors of the time-frequency distributions of dynamic hand gestures in order to enhance the hand motion features and further improve recognition performance. A dynamic group sparsity model and the DGS-Subspace Pursuit algorithm were utilized to model the spectrograms of the hand gestures. Such modeling can well isolate the features of dynamic hand gestures from the noise components in the spectrograms. Based on the experiments, we chose SVM as the classifier and set the sparsity level to 48. The experimental results show that the proposed method improves recognition accuracy by about 3.3% over the OMP-based method and outperforms the CNN-based method on small datasets, demonstrating the effectiveness of the proposed method. The method proposed in this work is suitable not only for the recognition of simple hand gestures but also for more complex sign language gestures, as it can enhance the feature extraction process. For future work, we would like to explore dual-hand gesture recognition and more subtle and complex sign language gestures using phased array radar or smart metasurfaces. Regarding hardware limitations, we feel that commercial millimeter wave radars still have much room for improvement, such as extending the operating distance under low-power illumination and antenna array configurations that enable user- and environment-aware, configurable beam radiation.
"year": 2022,
"sha1": "08e06b52e96f371e4eb8a5d687c782803fbf3f0a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/21/8535/pdf?version=1667644493",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9fe8d8e75fb012eb289ab27f33a6746386e8663",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Inclination towards a research career among first year medical students: an international study
Introduction: The importance of fostering clinicians who are also scientists is well recognized. It is of value to assess medical students’ inclination towards and self-perceived readiness for a research career, as this has implications on the future development of such individuals. Methods: A questionnaire was self-administered to all consenting first year medical students from eleven universities in ten countries. Questions were asked pertaining to inclination towards research careers, confidence in research methodology and ability to understand medical literature. Results: A total of 1354 questionnaires were completed, with a mean response rate of 76.5%. While 24.8% students expressed an interest in pursuing a research career, 48.3% were undecided. Students with prior research experience and students who were attending graduate medical school programmes were more likely to have an interest in a research career after graduation. Males were more interested in learning about biostatistics than females, while the reverse was true for learning about research ethics. Discussion: Most students in their first year of medical school are not inclined towards a research career. This finding applies internationally, across different countries and medical school systems. Thus, the onus is on medical schools to help transform the perception and attitudes of their students during the course of their training, so that a greater proportion will be interested in and ultimately pursue research careers.
Introduction
The lack of physicians leading and being involved in research projects can be a barrier to the progress of biomedical research. The growing number of full-time PhD researchers has been driven by the increasing technical knowledge necessary in many fields. However, to remain clinically relevant, it is of critical importance for biomedical research broadly, and clinical research in particular, to maintain a direct connection with patients and the clinical milieu. This lack, and the declining number of physicians in clinical research in at least some countries (Ley and Rosenberg, 2005; Rosenberg, 1999), has led to an increasing emphasis in these and other countries on training 'physician scientists' or 'clinician scientists' (defined as medical graduates who pursue a career primarily in biomedical research) and 'clinician investigators' (medical graduates whose primary role is patient care, while also engaging in some research activity). In particular, various initiatives have been launched to develop such expertise, for example by the National Institutes of Health and other organisations in the United States (Ley and Rosenberg, 2005), by the National Medical Research Council in Singapore and by the National Commission for Scientific and Technological Research in Chile. However, while a key factor in the decision to pursue a career in research is the individual's experience in medical school (Davis & Kelley, 1982), there is evidence that the number of medical students who intend to pursue research careers is declining (Neilson, 2003).
Various studies have investigated the career choices of medical students (Al-Nuaimi et al., 2008; Burazeri et al., 2005; Huda and Yousuf, 2006; Koenig, 1992) and, more specifically, the factors that influence the choice to pursue a career in research (Kassebaum et al., 1995). Among the key findings are that students who participated in research projects/electives and authored research papers while in medical school were more likely to take up research careers (Houlden et al., 2004; McManus et al., 1999; Nguyen-Van-Tam et al., 2001; Solomon et al., 2003). As might be expected, students who pursued a graduate research degree along with a medical degree, e.g. an MD/PhD, were more likely to pursue research careers (Andriole et al., 2008). Also, research training and experience while at medical school influenced both future career development and research performance (Evered et al., 1987). The strong influence of completing a graduate research degree along with a medical degree on the choice of a career in research was also found in a comprehensive systematic review of the career choices of medical students (Straus et al., 2006).
The overall objective of this study was to investigate students from different medical schools around the world as to their inclination towards, interest in, and perceived readiness for a research career. It was also of interest to assess the background and research knowledge of students when they first entered different medical schools, each with their own criteria for admission and system of teaching and training. Specifically, we were interested in: 1) assessing first year medical school students in their confidence in biomedical research methodology and the ability to interpret and understand biomedical literature and 2) describing their background and inclination towards research at the point of entry into medical school. We also wanted to explore differences in these specific domains between male and female students, and between students who reported having and not having conducted any research prior to medical school.
Methods
Medical schools from eleven universities in ten countries across five continents took part in this study through an international collegial network of medical school faculty members from different parts of the world, with very different educational and health care systems. The study team sought to include a variety of schools so as to provide as good a representation of medical students worldwide as possible.
A self-administered, non-anonymous questionnaire was used to survey all first year medical students (who consented) at each of the collaborating schools. This was done before the students were exposed to any substantial teaching on research methods as part of their medical school programme. The questionnaire, in English, was developed by the authors after extensive, iterative discussions in person, via telephone and email. The questionnaire was pilot-tested on three medical students from one of the medical schools (Yong Loo Lin School of Medicine, Singapore) and refined based on their suggestions. It included questions on age, gender, highest qualification obtained prior to medical school, previous research experience (courses taken on research or research ethics, research conducted), peer-reviewed publication history, whether the quality of the research produced by a medical school influenced their choice of medical school, frequency of regular biomedical journal reading, inclination towards research during medical school and after graduation, main motivation for being inclined or disinclined, interest in pursuing biomedical research oriented Masters and PhD after graduation, role models for interest in research and percentage of time they envisioned spending on research after establishing their career as physicians. Their confidence, perceived ability and perceived learning need in the domains of research ethics, study design and biostatistics were also assessed.
The questionnaire was translated into Portuguese (in Portugal and Brazil; the versions in the two countries were essentially the same except for minor modifications to comply with spelling differences), into Spanish (in Chile) and into Hebrew (in Israel). The responsibility for translation was taken by the site investigator in each country along with (in the case of Portuguese) one other coinvestigator proficient in the language. In all the other countries the questionnaire was administered in English.
In addition, a 'site questionnaire', gathering background information on the particular medical school, including its teaching philosophy, curriculum and profile of the students enrolled, was completed by each site investigator.
Study implementation
Each site investigator was sent a soft-copy of the questionnaire along with a system for assigning a unique identification number to each questionnaire. Each site was responsible for ensuring that the study obtained the necessary Institutional Review Board/Ethics Committee approval as required by the regulations of the country and institution.
The exact procedure for administering the questionnaires varied somewhat depending on the set up and structure within each site. The typical approach involved the investigator seeking permission from colleagues involved in coordinating or conducting lectures to allow the students to be surveyed at an appropriate time-point. For example, in one of the medical schools, the questionnaire was completed during a session related to research/evidence based medicine and was used as a teaching tool and basis for class discussion (after the questionnaires had been completed). The informed consent process, and in particular the need for written informed consent, varied from institution to institution, but in all cases, complied with the policy of the particular medical school and country. Participation was entirely voluntary and those who did not wish to participate were free to decline. No financial or other incentives were provided. The students completed their questionnaires individually. Site investigators then sent the de-identified data to the study team in Singapore, either as hardcopies for data entry, or as softcopies after data entry into a standard excel template prepared by the study team. In one university (Federal University of Minas Gerais in Brazil) an electronic data capture system was used with students entering their responses directly into the system.
Statistical Analysis
Descriptive summary statistics were computed for all the universities combined, as well as for individual universities. The associations of gender (male/female) and of research experience prior to medical school (yes/no) with the influence of the quality of a school's research on the choice of medical school, inclination towards research during medical school and after graduation, interest in pursuing a biomedical research oriented Masters or PhD after graduation, the percentage of time envisioned for research after establishing a career as a physician, and perceived learning need in the domains of research ethics and biostatistics were explored using chi-square tests. Statistical analysis was carried out using SPSS Version 15.0 (SPSS Inc, Chicago, USA, 2006). Participants with missing values for some of the questionnaire items were included in the analysis, with no imputation procedure carried out.
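As an illustration of the chi-square analysis described above (the study used SPSS; this sketch reproduces the same test in Python with SciPy, and the counts in the 2x2 table are hypothetical, not the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical gender-by-interest contingency table (illustrative counts only)
table = [[120, 280],   # male:   interested in a research career / not
         [115, 385]]   # female: interested / not
chi2, p, dof, expected = chi2_contingency(table)
```

A p-value below the chosen significance threshold would indicate an association between gender and interest in a research career; the same call pattern applies to each of the associations listed above.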
Results
A total of 1354 first year medical students from 11 universities in 10 countries spanning five continents agreed to participate in the study ( Table 1). The first medical school administered the survey in October 2008, the last in November 2009. The mean response rate was 76.5% (range: 26.0% to 100.0%). The lowest response rate of 26% was from a site where, at the time of the survey, students were busy preparing for their examinations. Table 1 summarises the characteristics of each participating university and the students who took part in the study. Only two of the medical schools accepted students who already had a first degree (i.e. they were graduate programmes); the others accepted mostly undergraduates. The schools used different teaching methods and most provided research opportunities both within and outside the medical school curriculum. Of the 1354 students, 55.9% were female, the median age being 19 years (range: 16 to 49), 83.8% had never previously taken a course in research, 94.2% had never previously taken a course in research ethics and 69.7% had no previous research experience.
In terms of research interest and inclination, 24.8% of the students expressed interest in pursuing a career in research after graduation, while 48.3% were undecided (Table 2). Among those who did not already possess a research oriented Masters or PhD qualification, 34.5% indicated that they were keen to pursue such a programme after graduation from medical school while 47.3% were undecided. Also 74.5% indicated that the particular medical school's reputation for quality of research was an important factor in determining their choice of medical school. The two schools with graduate medical programmes had, not surprisingly, a higher proportion of students with previous experience of research and research training. This appeared to translate into greater inclination towards research careers (P = 0.004, results not shown in Tables). However, neither had the highest proportion of students who felt that the quality of research was important in their choice of medical school, which was the highest in Trisakti University. Table 3 presents the proportion of study participants who were confident in various aspects of biomedical research. For each of the named tasks, there was a remarkable balance between the students who indicated they were confident in carrying out the task and those who indicated they were not (i.e. many of the overall percentages were not too far from 50%). An exception was the question about 'applying ethical principles', to which 70.4% of all the students answered that they were confident in doing so. Also of interest (not shown in Tables) was that 56.6% of the students felt that it is "easy to manipulate statistics to support results desired by investigators" while 95.4% felt that "to be an intelligent reader of the biomedical literature, it is necessary to know something about statistics".
We also investigated whether there were gender differences in terms of inclination towards a research career and other related questions (Table 4). While females were more likely to report that the quality of research had been important in their choice of medical school, more males were interested in pursuing a research project outside of the official curriculum and in a research career after graduation (marginally not statistically significant). Males were more interested in learning about biostatistics alone than females, while the converse was true for learning about research ethics.
The results outlined in Table 5 suggest that students with prior research experience were more likely to have an interest in a research career after graduation and in spending more time on research. However, they were less likely to be interested in pursuing a research-oriented Master's or PhD after graduation.
Discussion
There is an increasing emphasis in many countries on training 'clinician scientists' and 'clinician investigators'. A sufficient critical mass of such individuals is crucial for the success of any biomedical research programme. It is therefore essential to understand the background and inclination of medical students towards research careers. There are many factors that can influence the decision to pursue a research career. Among these are completing a graduate research degree along with a medical degree, having a desire to teach, the presence and influence of a role model, curiosity to discover the unknown, enjoyment of problem solving, having a high level of independence and a desire to help others indirectly through research (McGee & Keller, 2007; Reynolds, 2008; Straus et al., 2006).
We investigated the inclination of medical students from 11 universities in different parts of the world towards careers in biomedical research. We also assessed students' confidence in biomedical research methodology and their ability to interpret and understand medical literature.
Although such studies have been done before, this study is one of the few to investigate the inclination of medical students towards careers in research across different medical schools worldwide. This allows for larger numbers (both in terms of the number of students and the number of medical schools involved) and also a better representation and diversity of students, from a range of different cultures and medical school systems. A study by Burzeri et al. (2005) was also international in nature, and indeed had a larger sample size than our study. However, it was limited to Southeast European countries and the questions asked were less extensive and comprehensive than ours.
Although there was some variation among the schools, it is of concern that in all schools, less than a third of the first year students expressed an interest in pursuing a research career, with the overall results of our study suggesting that only about a quarter of students in their first year of medical school are inclined towards a career in research.
The emphasis of the study was not primarily to compare and contrast the different schools, but to obtain an overall picture of students' perception of research and research careers. Hence, no formal statistical tests/comparisons were made to compare the results between different individual schools.
Nevertheless there were some interesting features in the results observed in particular schools. For example, Trisakti University, although not a traditionally research-intensive university in terms of research output, had the highest proportion of students who felt that quality of research was important in their choice of medical school. Also these students seemed generally more confident in the aspects of undertaking research (Table 3) compared particularly to the other undergraduate medical schools. The survey of these first year medical students represents the first phase of the overall project. It sets the foundation for the second phase, which is to follow up on the same students, at the end of their study in medical school, to assess any changes in inclination towards research careers and to see how this may be affected by the curriculum and format of teaching in each school. This is along the lines of the study by Gat and colleagues (2007), although their focus was specifically on residency in psychiatry.
As already indicated, one limitation of the study is the difficulty in comparing students from the different medical schools due to the large number of possible confounding factors, especially details about the different medical school curricula and learning 'milieus', most of which are not captured in the survey.
Other limitations include the possibility that students who completed the questionnaires in languages other than English may have interpreted the questions slightly differently. Even for those who responded in English, there may be subtle differences between students who use English as their first language and students who use English as a foreign language. Likewise, there may be cultural differences that could affect their responses. However, it should be noted that we are studying perceptions, rather than actual knowledge. No formal sample size calculation was carried out as the intention was to try and recruit all first year medical students in each of the twelve medical schools involved. Likewise, we had sought to include as many medical schools as we could in the collaboration, in order to maximise the number of students recruited.
Conclusion
Our study concluded that most students in their first year of medical school are not inclined towards a career in research. This finding applies internationally, across the different countries and medical school systems that were surveyed. Thus, the onus is on medical schools to help transform the perceptions and attitudes of their students during the course of their medical school training, so that a greater proportion will be interested in and ultimately choose research careers. This is particularly true for medical schools for which the training of graduates inclined towards careers in research is a priority. These schools may also want to refine their student selection or admission process to ensure that they accept students with a stronger aptitude for and interest in research careers.
Conflicts of interest
The author(s) declare that they have no conflicts of interest.
"year": 2011,
"sha1": "b4e25b826e1a43f88565feb00b1101cf5eb9b177",
"oa_license": "CCBY",
"oa_url": "http://seajme.sljol.info/articles/10.4038/seajme.v5i2.197/galley/240/download/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2c5646c7ad7b9c07a5ef0e19c0820b84d970e774",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
We show that there is a braided tensor category structure on the category of $C_1$-cofinite modules for the (universal or simple) Virasoro vertex operator algebras of arbitrary central charge. In the generic case of central charge $c=13-6(t+t^{-1})$, with $t \notin \mathbb{Q}$, we prove semisimplicity, rigidity and non-degeneracy and also compute the fusion rules of this tensor category.
1. Introduction

1.1. Conformal-field-theoretic tensor categories. Tensor categories arising in conformal field theory have been studied since the late 1980s. Moore and Seiberg were the first to realize that an axiomatic formulation of rational two-dimensional conformal field theory essentially leads to what we now call a modular tensor category [MS]. A vertex operator algebra is a mathematically rigorous formulation of the chiral algebra of a two-dimensional conformal field theory and, thus, it is a natural general expectation that sufficiently nice categories of modules of a given vertex operator algebra form rigid braided tensor categories.
In particular, Moore and Seiberg predicted [MS] that the category of integrable highest-weight modules for an affine Lie algebra at a fixed positive-integral level should have the structure of a rigid braided tensor category. Kazhdan and Lusztig were then the first to construct such a structure on a certain category of modules for affine Lie algebras [KL1]- [KL5]. However, the level in their work was not a positive integer, but was restricted so that adding the dual Coxeter number did not give a positive rational number. Then, several works, including those of Beilinson-Feigin-Mazur [BFM] and Huang-Lepowsky [HL6], constructed rigid braided tensor categories at positive-integral levels; these were in fact shown to be modular tensor categories.
The Huang-Lepowsky construction of these categories is based on the tensor category theory for general vertex operator algebras which they developed in [HL1]- [HL6] and [H1], in the rational case, and in [HLZ1]- [HLZ9], for the logarithmic case (with the latter work joint with Zhang). This theory turns out to be a powerful tool to construct tensor category structures and prove results in vertex operator algebras, topology, mathematical physics and related areas. Currently, however, almost all known examples of braided tensor category structures on categories of modules of a vertex operator algebra are for algebras satisfying Zhu's C 2 -cofiniteness condition.
The most obvious non-C_2-cofinite examples are the Heisenberg vertex operator algebras. The category generated by their highest-weight modules, with real highest weights, was shown to be a braided tensor category (in fact a vertex tensor category) in [CKLR]. The first substantial examples of non-rational and non-C_2-cofinite vertex operator algebras are the affine vertex operator algebras at admissible levels, where the category of ordinary modules has been shown to admit a braided tensor category structure [CHY]. Rigidity was also proven in the case that the Lie algebra is simply-laced [C1]. The non-generic but non-admissible-level case is the most challenging and here the tensor category structure is not yet understood (for recent progress, see [ACGY, CY]).

Acknowledgements. We would like to thank Yi-Zhi Huang, Robert McRae and Antun Milas for many useful discussions. T. C. is supported by NSERC #RES0020460, C. J. is supported by NSFC grants 11771281 and 11531004, and D. R.'s research is supported by the Australian Research Council Discovery Project DP160101520.
Here, we study the important family of vertex operator algebras coming from representations of the Virasoro algebra. For c ∈ C, denote by M(c, 0) and L(c, 0) the universal Virasoro vertex operator algebra and its unique simple quotient, respectively [FZ1]. The representation theories of both have been studied intensively in the mathematics and physics literature. In particular, the Virasoro minimal models correspond to L(c_{p,q}, 0), where
$$c_{p,q} = 1 - \frac{6(p-q)^2}{pq}, \qquad p, q \in \mathbb{Z}_{\geq 2},\ \gcd\{p, q\} = 1. \tag{1.1}$$
The vertex operator algebras L(c_{p,q}, 0) are rational [W] and C_2-cofinite [DLM]. Huang's general theorems [H2, H3] for rational and C_2-cofinite vertex operator algebras then show that the modules of L(c_{p,q}, 0) form a modular tensor category.
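As a quick numerical sanity check of (1.1), the first few minimal-model central charges are easily tabulated (a throwaway sketch, not part of the paper; the well-known Ising and Lee–Yang values serve as reference points):

```python
from fractions import Fraction
from math import gcd

def c_minimal(p: int, q: int) -> Fraction:
    """Central charge c_{p,q} = 1 - 6(p-q)^2/(pq) of the (p,q) minimal model."""
    assert p >= 2 and q >= 2 and gcd(p, q) == 1
    return 1 - Fraction(6 * (p - q) ** 2, p * q)

# Well-known examples: Ising c_{3,4} = 1/2 and Lee-Yang c_{2,5} = -22/5.
print(c_minimal(3, 4))   # 1/2
print(c_minimal(2, 5))   # -22/5
```

Exact rational arithmetic avoids any floating-point ambiguity when comparing central charges.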
1.2. The problem. In this paper, we aim to construct braided tensor structures on certain categories of M(c, 0)-modules for arbitrary c ∈ C (note that M(c, 0) = L(c, 0) unless c = c p,q as in (1.1)). As M(c, 0) is neither rational nor C 2 -cofinite, constructing such tensor category structures for these types of vertex operator algebras is usually very hard due to the fact that they generally have infinitely many simple objects and may admit nontrivial extensions. Another difficult problem is to establish rigidity -even for semisimple categories this is usually very difficult.
Given c ∈ C, let H c denote the set of h ∈ C such that the Virasoro Verma module V (c, h) of central charge c and conformal weight h is reducible. Let L(c, h) denote the simple quotient of V (c, h). We study the category O fin c of finite-length M(c, 0)-modules whose composition factors are of the form L(c, h), with h ∈ H c . It turns out that the objects of this category are all C 1 -cofinite. It is actually very natural to consider the category C 1 of lower-bounded C 1 -cofinite modules, not only because the C 1 -cofiniteness condition is a very minor restriction and is relatively easy to verify for familiar families of vertex operator algebras, but also because the fusion product of two C 1 -cofinite modules is again C 1 -cofinite [Mi] (see also [N]). However, this category is not abelian in general because it need not be closed under taking submodules and contragredient duals. Furthermore, the associativity isomorphism that plays a key role in the tensor category theory of Huang, Lepowsky and Zhang is also hard to verify for C 1 .
In [H3], Huang studied the category of grading-restricted generalized modules for C 1 -cofinite vertex operator algebras (in the sense of Li [Li1]) and proved that they form a braided tensor category if the simple objects are R-graded, C 1 -cofinite and there exists a positive integer N such that the differences between the real parts of the highest conformal weights of the simple objects are bounded by N and the level-N Zhu algebra is finite-dimensional. In particular, if the vertex operator algebra is of positive-energy (that is, if V (n) = 0 for n < 0 and V (0) = C1) and has only finitely many simple modules (as is the case when it is C 2 -cofinite), then the category of gradingrestricted generalized modules has a braided tensor category structure. Unfortunately, Huang's conditions do not hold for the Virasoro vertex operator algebras, mainly because of the existence of infinitely many simple modules.
1.3. The main results. We first show that the category C 1 of lower-bounded C 1 -cofinite generalized M(c, 0)-modules is actually the same as the category O fin c of finite-length M(c, 0)-modules with composition factors L(c, h) for h ∈ H c . Since it is obvious that the category O fin c is closed under taking submodules and contragredient duals, this bypasses the difficulty of showing that C 1 is abelian directly. The identification of these two categories, together with the applicability result in [H4], also verifies the associativity isomorphism needed to invoke the logarithmic tensor category theory of Huang, Lepowsky and Zhang. As a result, the category O fin c has a braided tensor category structure.
We also prove rigidity and non-degeneracy for the category O^fin_c when c is generic, that is, when c = 13 − 6(t + t^{-1}) with t ∉ Q. A full subcategory O^L_c of this category has been studied in [FZ2] by I. Frenkel and M. Zhu. We first show that the category O^fin_c is semisimple, with infinitely many simple objects, and then compute the fusion rules of these simples. As a consequence of these computations and the coset realization of the Virasoro vertex operator algebra [GKO, ACL], we show that there is a braided tensor equivalence between the full subcategory O^L_c and a simple current twist of the category of ordinary modules for the affine vertex operator algebra V_ℓ(sl_2) at a certain non-rational level ℓ depending on t. This shows that the objects in O^L_c are rigid. Since the category O^fin_c is generated by the simple objects in O^L_c and a similar full subcategory O^R_c (by the fusion rules of Theorem 5.2.4), the category O^fin_c is rigid. To summarize, the main theorem of the paper is as follows.
Main Theorem. Let O^fin_c denote the category of finite-length M(c, 0)-modules of central charge c = 13 − 6(t + t^{-1}) whose composition factors are C_1-cofinite. Then,
(1) O^fin_c has a braided tensor category structure (Theorem 4.2.6).
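For orientation, the generic fusion rules referred to above (Theorem 5.2.4 is not reproduced in this excerpt) follow the familiar doubled sl(2) Clebsch–Gordan pattern in the Kac labels; the sketch below assumes that standard pattern and is purely illustrative:

```python
def generic_fusion(rs1, rs2):
    """Fusion L_{r,s} x L_{r',s'} of simple modules at generic central charge,
    assuming the standard doubled sl(2) Clebsch-Gordan pattern in each index."""
    (r1, s1), (r2, s2) = rs1, rs2
    return sorted(
        (r, s)
        for r in range(abs(r1 - r2) + 1, r1 + r2, 2)  # |r1-r2|+1, ..., r1+r2-1, step 2
        for s in range(abs(s1 - s2) + 1, s1 + s2, 2)
    )

# Mirrors sl(2) tensor products in each index separately: 2 x 2 = 1 + 3.
print(generic_fusion((2, 1), (2, 1)))   # [(1, 1), (3, 1)]
```

In particular, the vacuum module L_{1,1} acts as the tensor unit under this pattern.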
1.4. Applications and future work. Rigid braided vertex tensor categories are relevant in various modern problems. For example, the quantum geometric Langlands program can be related to equivalences of tensor categories of modules of W -algebras and affine vertex algebras, see [AFO,Sec. 6] for example. In fact, our rigidity proof (Theorem 5.5.3) also proves one case of Conjecture 6.4 in [AFO] for G = SU(2). From a physics perspective, this relates to the S-duality and again vertex tensor categories are crucial as they describe categories of line defects ending on topological boundary conditions [CGai, FGai].
Vertex tensor categories also allow one to construct module categories of vertex algebras out of those of certain subalgebras [KO,HKL,CKM1]. Our results are very useful from this point of view for various important vertex algebras as they all contain the Virasoro vertex operator algebra as their subalgebra. For example, in the context of S-duality, vertex superalgebras that are extensions in a completion of O fin c appear in [CGL]. As an instructive example, the simple affine vertex operator superalgebra L k (osp(1|2)) at positive-integer levels [CFK] and, more generally at all admissible levels [CKLiR], has been well-understood by viewing it as an extension of the tensor product of L k (sl(2)) and a Virasoro minimal model. The results of our work allow us to extend these studies to generic levels.
More importantly, the vertex algebras with the best understood non-semisimple representation categories are associated to extensions of the Virasoro vertex operator algebra at central charge c 1,p . These are the triplet algebras W(p) [Ka,GKa2,AM1,NT,TW,FGST], the singlet algebras M(p) [A1,AM2,CM,RW1] and the logarithmic B(p) algebras [A2,CRW,ACKR]. Note that the B(p)-algebras are extensions of the Virasoro algebra times a rank one Heisenberg algebra and that B(2) is the rank one βγ vertex algebra whose even subalgebra is L −1/2 (sl(2)) [R].
An illustration of the usefulness of tensor categories is the recent work [ACGY]. There, uniqueness results of certain vertex operator algebra extensions of the Virasoro algebra at central charge c 1,p are derived [ACGY,Thm. 8 and Cor. 14]. These results could only be proven because of our Main Theorem. Moreover, the uniqueness statements were then employed to resolve affirmatively the conjectures (see [CRW,C2,ACKR]) that the B(p) algebra is a subregular W -algebra of type sl p−1 and also a chiral algebra of Argyres-Douglas theories of type (A 1 , A 2p−3 ).
Another application in this spirit is the study of the (conjectural) rigid braided tensor category structure for these extensions. While rigid vertex tensor category structures for the triplet algebras W(p) are known [AM1,TW], the existence of analogous structures are open for the singlet and B(p) families. When combined with [CKM1], our results are planned to be used to establish this existence and to study the vertex tensor category of those M(p)-modules (respectively, B(p)modules) that lie in the Ind-completion of the category of grading-restricted C 1 -cofinite modules of the Virasoro algebra (respectively, tensored with rank one Heisenberg algebra).
Preliminaries
2.1. Vertex operator algebras. Let (V, Y, 1, ω) be a vertex operator algebra. We first recall the definitions of various types of V -modules.
(1) A weak V-module is a vector space W equipped with a vertex operator map
$$Y_W \colon V \otimes W \to W[[x, x^{-1}]], \qquad v \otimes w \mapsto Y_W(v, x)w = \sum_{n \in \mathbb{Z}} v_n w \, x^{-n-1}, \tag{2.1}$$
satisfying the following axioms:
(i) The lower truncation condition: for u ∈ V and w ∈ W, Y_W(u, x)w has only finitely many terms with negative powers of x.
(ii) The vacuum property: Y_W(1, x) is the identity endomorphism 1_W of W.
(iii) The Jacobi identity: for u, v ∈ V and w ∈ W,
$$x_0^{-1}\delta\left(\frac{x_1 - x_2}{x_0}\right) Y_W(u, x_1) Y_W(v, x_2) w - x_0^{-1}\delta\left(\frac{x_2 - x_1}{-x_0}\right) Y_W(v, x_2) Y_W(u, x_1) w = x_2^{-1}\delta\left(\frac{x_1 - x_0}{x_2}\right) Y_W(Y(u, x_0)v, x_2) w.$$
(iv) The Virasoro algebra relations: if we write $Y_W(\omega, x) = \sum_{n \in \mathbb{Z}} L_n x^{-n-2}$, then for any m, n ∈ Z, we have
$$[L_m, L_n] = (m - n) L_{m+n} + \frac{m^3 - m}{12}\, \delta_{m+n,0}\, c.$$
(2) A generalized V-module is a weak V-module W equipped with a C-grading
$$W = \bigoplus_{n \in \mathbb{C}} W_{[n]}, \tag{2.5}$$
where each W_{[n]} is a generalized eigenspace for the operator L_0 with eigenvalue n.
(3) A lower-bounded generalized V -module is a generalized V -module such that for any n ∈ C, W [n+m] = 0 for m ∈ Z sufficiently negative. (4) A grading-restricted generalized V -module is a lower-bounded generalized V -module such that dim W [n] < ∞ for any n ∈ C. (5) An ordinary V -module (sometimes just called a V -module for brevity) is a grading-restricted generalized V -module such that the W [n] in (2.5) are eigenspaces for the operator L 0 . (6) A generalized V -module W has length l if there exists a filtration W = W 1 ⊃ · · · ⊃ W l+1 = 0 of generalized V -submodules such that each W i /W i+1 is irreducible. A finite-length generalized V -module is one whose length is finite.
Definition 2.1.2. A vertex operator algebra is rational if every weak module is a direct sum of simple ordinary modules. We say that a category of ordinary V -modules is semisimple if every ordinary module is a direct sum of simple ordinary modules.
Let V be a vertex operator algebra and let (W, Y_W) be a lower-bounded generalized V-module, graded as in (2.5). Its contragredient module is then the vector space
$$W' = \bigoplus_{n \in \mathbb{C}} \big(W_{[n]}\big)^*, \tag{2.6}$$
equipped with the vertex operator map Y' defined by
$$\langle Y'(v, x) w', w \rangle = \langle w', Y_W^{o}(v, x) w \rangle$$
for any v ∈ V, w' ∈ W' and w ∈ W, where
$$Y_W^{o}(v, x) = Y_W\big(e^{x L_1} (-x^{-2})^{L_0} v, x^{-1}\big)$$
is the opposite vertex operator (see [FHL]). We also use the standard notation $\overline{W} = \prod_{n \in \mathbb{C}} W_{[n]}$ for the formal completion of W with respect to the C-grading.
The notion of a logarithmic intertwining operator [M1,HLZ3] plays a key role in the study of the representations of vertex operator algebras that are not rational, such as the Virasoro vertex operator algebras discussed in this paper. Let W {x} denote the space of formal power series in arbitrary complex powers of x with coefficients in W .
(ii) The Jacobi identity: for v ∈ V, w_{(1)} ∈ W_1 and w_{(2)} ∈ W_2,
$$x_0^{-1}\delta\left(\frac{x_1 - x_2}{x_0}\right) Y_{W_3}(v, x_1)\mathcal{Y}(w_{(1)}, x_2)w_{(2)} - x_0^{-1}\delta\left(\frac{x_2 - x_1}{-x_0}\right) \mathcal{Y}(w_{(1)}, x_2)Y_{W_2}(v, x_1)w_{(2)} = x_2^{-1}\delta\left(\frac{x_1 - x_0}{x_2}\right) \mathcal{Y}(Y_{W_1}(v, x_0)w_{(1)}, x_2)w_{(2)}.$$
(iii) The L_{-1}-derivative property: for any w_{(1)} ∈ W_1,
$$\mathcal{Y}(L_{-1} w_{(1)}, x) = \frac{d}{dx} \mathcal{Y}(w_{(1)}, x).$$
The dimension of the space of logarithmic intertwining operators of a given type is called the fusion coefficient or fusion rule of that type. We remark that the term "fusion rule" commonly has a different meaning in the literature, referring instead to an explicit identification of the isomorphism class of a "fusion product".
The following finiteness condition plays an important role in this paper: a lower-bounded generalized V-module W is C_1-cofinite if dim W/C_1(W) < ∞, where C_1(W) ⊆ W is the subspace spanned by the elements u_{-1}w for homogeneous u ∈ V of positive weight and w ∈ W. We shall also need the following facts.
2.2. Representations of the Virasoro algebra. The Virasoro algebra L is the Lie algebra spanned by elements L_n, n ∈ Z, and a central element C, with the commutation relations
$$[L_m, L_n] = (m - n) L_{m+n} + \frac{m^3 - m}{12}\, \delta_{m+n,0}\, C, \qquad [L_m, C] = 0.$$
For c, h ∈ C, the Verma module of central charge c and conformal weight h is
$$V(c, h) = U(\mathcal{L}) \otimes_{U(\mathcal{L}_{\geq 0})} \mathbb{C}\mathbf{1}_{c,h},$$
where U(−) denotes a universal enveloping algebra and the L_{≥0}-module structure of C1_{c,h} is given by L_0 1_{c,h} = h 1_{c,h}, C 1_{c,h} = c 1_{c,h} and L_n 1_{c,h} = 0 for n > 0. As usual, the Verma module V(c, h) has a unique (possibly trivial) maximal proper submodule. We denote its unique simple quotient by L(c, h).
When h = 0, L_{-1}1_{c,0} is a singular vector in V(c, 0) (the tensor product symbol is omitted for brevity). It was shown in [FZ1] that the quotient
$$M(c, 0) = V(c, 0) \big/ U(\mathcal{L}) L_{-1}\mathbf{1}_{c,0}$$
admits the structure of a vertex operator algebra. It is called the universal Virasoro vertex operator algebra of central charge c. The simple quotient L(c, 0) of M(c, 0) therefore admits a vertex operator algebra structure as well. Note that all L(c, 0)- and M(c, 0)-modules are L-modules.
Another important result of [FZ1] is that every highest-weight L-module of central charge c is an M(c, 0)-module. We recall the existence criterion of Feigin and Fuchs for singular vectors in Verma L-modules. Useful expositions may be found in [IK, KRa]. Note that a Verma module is reducible if and only if it possesses a non-trivial singular vector (by which we mean one that is not proportional to the cyclic highest-weight vector).
$$c = c(t) = 13 - 6\left(t + t^{-1}\right), \qquad h = h_{r,s}(t) = \frac{r^2 - 1}{4}\, t - \frac{rs - 1}{2} + \frac{s^2 - 1}{4}\, t^{-1}. \tag{2.20}$$
(1) If there exist r, s ∈ Z_{≥1} and t ∈ C \ {0} such that c and h satisfy (2.20), then there is a singular vector of weight h + rs in the Verma module V(c, h).
(2) Conversely, if V(c, h) possesses a non-trivial singular vector, then there exist r, s ∈ Z_{≥1} and t ∈ C \ {0} such that (2.20) holds.
Of course, any singular vector in V(c, h) may be expressed as a linear combination of Poincaré–Birkhoff–Witt-ordered monomials in the L_i, i < 0, acting on the highest-weight vector 1_{c,h}. A crucial fact for what follows is that the coefficient of $L_{-1}^N$ is never 0 (irrespective of the chosen order). Here, N is the conformal weight of the singular vector minus that of 1_{c,h}.
Proposition 2.2.2. A singular vector of conformal weight h + N in V(c, h) may be written in the form
$$\sum_{|I| = N} a_I(c, h)\, L_{-I}\, \mathbf{1}_{c,h},$$
where $L_{-I} = L_{-i_1} \cdots L_{-i_n}$ and the sum is over sequences I = {i_1, ..., i_n} of ordered n-tuples i_1 ≥ ··· ≥ i_n with |I| = i_1 + ··· + i_n = N. Moreover, the coefficients a_I(c, h) depend polynomially on c and h and the coefficient a_{\{1,...,1\}}(c, h) of $L_{-1}^N$ may be chosen to be 1.
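As an illustration, the smallest non-trivial case is a level N = 2 singular vector v = (L_{-1}² + a L_{-2}) 1_{c,h}, normalised so that the coefficient of L_{-1}² is 1. Imposing L_1 v = L_2 v = 0 via the Virasoro relations yields the textbook linear conditions (4h + 2) + 3a = 0 and 6h + a(4h + c/2) = 0 (a standard computation, assumed here rather than taken from this excerpt). A quick consistency check at the Ising point:

```python
from fractions import Fraction

def level2_constraints(c, h, a):
    """L_1- and L_2-annihilation conditions for v = (L_{-1}^2 + a L_{-2}) 1_{c,h}."""
    return (4 * h + 2) + 3 * a, 6 * h + a * (4 * h + c / 2)

c, h = Fraction(1, 2), Fraction(1, 16)   # Ising point: c = 1/2, h = 1/16
a = -(4 * h + 2) / 3                     # solve the L_1 condition for a
eq1, eq2 = level2_constraints(c, h, a)
assert eq1 == 0 and eq2 == 0             # both conditions hold: a singular vector exists
print(a)                                 # -3/4
```

That both conditions vanish simultaneously is exactly the constraint (2.20) relating c and h for r s = 2.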
For brevity, we shall also denote the Verma module V(c(t), h_{r,s}(t)) and its simple quotient by V_{r,s} and L_{r,s}, respectively.
The set H_c of conformal weights is often referred to as the extended Kac table because the original Kac table of the Virasoro minimal models corresponds to the subset of h_{r,s}(t) with t = p/q (p, q ∈ Z_{≥2} and gcd{p, q} = 1), r = 1, ..., p − 1 and s = 1, ..., q − 1. This subset consists precisely of the conformal weights of the simple L(c(t), 0)-modules [W, RW2].
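Concretely, with c(t) = 13 − 6(t + t⁻¹) and the Kac weights in one common normalisation, h_{r,s}(t) = (r² − 1)t/4 − (rs − 1)/2 + (s² − 1)/(4t) (assumed here; conventions vary by relabelling r and s), the point t = 4/3 reproduces the Ising Kac table:

```python
from fractions import Fraction

def c_of_t(t):
    """Central charge c(t) = 13 - 6(t + 1/t)."""
    return 13 - 6 * (t + 1 / t)

def h_rs(r, s, t):
    """Kac weight h_{r,s}(t) in one common normalisation (an assumption)."""
    return Fraction(r * r - 1, 4) * t - Fraction(r * s - 1, 2) + Fraction(s * s - 1, 4) / t

t = Fraction(4, 3)                                    # gives c(t) = 1/2, the Ising value
print(c_of_t(t))                                      # 1/2
print(h_rs(1, 1, t), h_rs(2, 1, t), h_rs(1, 2, t))    # 0 1/2 1/16
```

The three weights 0, 1/2 and 1/16 are the familiar Ising primaries, matching the minimal-model Kac table described above.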
The embedding structure of Virasoro Verma modules is also due to Feigin and Fuchs. A convenient summary appears in [IK,Ch. 5].
Theorem 2.2.4 ([FF]). The embedding structures of the reducible Verma L-modules V_{r,s}, where r, s ∈ Z_{≥1}, are as follows:
(1) If t = q/p, for p, q ∈ Z_{≥1} and gcd{p, q} = 1, then there are two possible "shapes" for the embedding diagrams (see [IK] for further details): If r is a multiple of p or s is a multiple of q, then one has the embedding chain (2.23). Otherwise, the embedding diagram is as in (2.24).
(2) If t = −q/p, for p, q ∈ Z_{≥1} and gcd{p, q} = 1, then there are again two possible "shapes", similar to those in (1) except that the diagrams are now finite (again the details may be found in [IK]): If r is a multiple of p or s is a multiple of q, then one has the embedding chain (2.25). Otherwise, the embedding diagram is as in (2.26).
Corollary 2.2.5. It follows from these embedding structures that:
(1) Every non-zero submodule of a Verma L-module is either a Verma module itself or the sum of two Verma modules.
In light of Corollary 2.2.5(4) and the fact that the L(c p,q , 0)-modules (p, q ∈ Z ≥2 and gcd{p, q} = 1) form a modular tensor category [H2, H3], we shall restrict our considerations in what follows to modules of the universal Virasoro vertex operator algebras M(c, 0) (for arbitrary c ∈ C).
Remark 2.2.6. The representations of M(c, 0) have been investigated in both the mathematics and physics literature. We list some relevant work here.
(1) The representation theory of M(c, 0) has been explored in detail by physicists under the moniker "logarithmic minimal models" [PRZ, RS, MRR]. In particular, fusion products were studied [GKa1,EF,MR] in order to determine the types of non-semisimple modules that appeared at central charges of the form c p,q , p, q ∈ Z ≥1 , especially the so-called staggered modules [Ro, KyR] that arise in logarithmic conformal field theories [RoS,G,F2,Ga,CR].
(2) The famous triplet algebras W(p) [Ka,F1,GKa2,AM1,NT,TW,FGST] and singlet algebras M(p) [A1,AM2,CM,RW1] have been extensively studied as extensions of M(c 1,p , 0). The B(p)-algebras [A2,CRW,ACKR] are similarly extensions of M(c 1,p , 0) times a rank one Heisenberg vertex operator algebra. (3) The representations and fusion rules of the Virasoro algebra of central charge c 1,1 = 1 have been studied in [M2]. In [Mc], these fusion rules were used to prove that the semisimple full subcategory of M(1, 0)-modules generated by the L(1, n 2 4 ), n ∈ Z ≥0 , is braided-tensor and tensor equivalent to a modification of the category of finite-dimensional modules of sl 2 involving 3-cocycle on Z/2Z. (4) For the case t / ∈ Q, the category of modules generated by the V 1,s with s ∈ Z ≥1 was studied in [FZ2] and the fusion rules of this category were determined. These rules will be crucial to our results, see Section 5.2 below.
The vertex operator algebra M(c, 0) is neither rational nor C_2-cofinite because it has infinitely many inequivalent simple modules. It is therefore natural to study the C_1-cofiniteness of an M(c, 0)-module W. One can check that C_1(W) contains L_{-n}W for all n ≥ 2 (2.27), hence the C_1-quotient of a highest-weight M(c, 0)-module is spanned by the powers of L_{-1} acting on the cyclic highest-weight vector. We therefore have the following consequences of Proposition 2.2.2 and Corollary 2.2.5.
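To see why the C_1-quotient of a highest-weight module is so small, note that a PBW monomial L_{-I}w of weight N lies in C_1(W) as soon as some part of I exceeds 1, leaving only L_{-1}^N w in the quotient. Counting monomials by partitions makes this concrete (an illustrative sketch; the partition-counting helper is mine, not from the paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest=None):
    """Number of partitions of n into parts at most `largest`, i.e. the number
    of PBW monomials L_{-I} of total weight n with parts bounded by `largest`."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    # Sum over the largest part k of the partition.
    return sum(partitions(n - k, min(k, n - k)) for k in range(1, min(largest, n) + 1))

N = 6
total = partitions(N)            # all PBW monomials of weight N: p(6) = 11
survivors = 1                    # only L_{-1}^N survives in the C_1-quotient
print(total, total - survivors)  # 11 10
```

So at weight h + N the Verma module has p(N)-dimensional graded piece, of which all but one dimension is absorbed into C_1.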
Equivalence of two categories
Recall that C 1 denotes the category of lower-bounded C 1 -cofinite generalized M(c, 0)-modules and that O fin c denotes the category of finite-length M(c, 0)-modules with composition factors L(c, h) for h ∈ H c . In this section, we will prove that these two categories are the same.
From Lemma 2.1.5(2) and Corollary 2.2.7, we have O fin c ⊆ C 1 . We therefore only need to prove the reverse inclusion in what follows.
Lemma 3.1.1. Let V be a vertex operator algebra and let W be a generalized V -module.
(1) Suppose that 0 ≠ w ∈ W has L_0-eigenvalue h ∈ C and that h − n is not an eigenvalue of L_0 for any n ∈ Z_{≥1}. Then, w ∉ C_1(W).
(2) If W is lower-bounded and W = C_1(W), then W = 0.
Proof. If w ∈ C_1(W), then there would exist homogeneous elements u^i ∈ M(c, 0), with wt u^i ∈ Z_{≥1}, and w_i ∈ W such that w = Σ_i (u^i)_{-1} w_i. As w ≠ 0, there exists at least one non-zero term (u^k)_{-1} w_k in this decomposition. But then wt w_k = wt w − wt u^k ∈ h − Z_{≥1}, a contradiction. This proves (1).
For (2), assume that W ≠ 0. As W is lower-bounded, there exists an L_0-eigenvalue h ∈ C such that h − n is not an eigenvalue of L_0 for any n ∈ Z_{≥1}. If w ≠ 0 is a corresponding eigenvector, then w ∈ W = C_1(W), contradicting (1).
Proof. Since W is lower-bounded, we can choose an L_0-eigenvector w_1 ∈ W whose eigenvalue h is such that there are no L_0-eigenvalues of the form h − n, n ∈ Z_{≥1}. It follows that w_1 is a highest-weight vector; moreover, w_1 ∉ C_1(W) by Lemma 3.1.1(1). Denote by W_1 the highest-weight submodule of W generated by w_1. From Lemma 2.1.5(1), W/W_1 is also a lower-bounded C_1-cofinite generalized module. Moreover, w_1 ∉ C_1(W) gives C_1(W) ⊊ C_1(W) + W_1, and hence the dimension of the C_1-quotient of W/W_1 is strictly smaller than that of W. The same argument now shows that W/W_1 has a highest-weight vector w̄_2, where w_2 ∈ W is any element whose image in W/W_1 is w̄_2. Now, w̄_2 generates a highest-weight submodule W_2/W_1 of W/W_1. In other words, we have obtained a sequence of epimorphisms W ↠ W/W_1 ↠ W/W_2 ↠ ···. Continuing in this manner, we obtain sequences in which each W/W_i is lower-bounded, C_1-cofinite and generalized. Moreover, the dimensions of the C_1-quotients of the W/W_i are strictly decreasing as i increases. As dim W/C_1(W) is finite, there exists n such that this dimension is 0. By Lemma 3.1.1, we therefore have W/W_n = 0. Thus, we have obtained a sequence of submodules
$$0 = W_0 \subset W_1 \subset \cdots \subset W_n = W, \tag{3.1}$$
in which each quotient W_i/W_{i-1} is a highest-weight module.
Proposition 3.1.2 shows that an arbitrary W ∈ C_1 is composed of finitely many highest-weight modules, but this does not guarantee that W has finite length because the length of one of the highest-weight modules might be infinite. We therefore need the following stronger result, whose proof is deferred until the following section.
Given this proposition, the main theorem of this section, which plays the key role in the construction of the tensor category structure on O fin c , is easily proven.
Theorem. The category C_1 coincides with O^fin_c.
Proof. As remarked above, we only need to prove that C_1 ⊆ O^fin_c. Given W ∈ C_1, the highest-weight modules of Proposition 3.1.2 are C_1-cofinite, by Proposition 3.1.3, so they are finite-length, by Corollary 2.2.7(2). It follows that W is also finite-length. Moreover, the composition factors of the highest-weight modules will all have the form L(c, h), with h ∈ H_c, by Corollary 2.2.7(3), hence so will those of W. We conclude that W ∈ O^fin_c, completing the proof. It only remains to prove Proposition 3.1.3.
3.2. Proof of Proposition 3.1.3. The hard work needed for this proof is isolated below as Lemma 3.2.1. For this, we consider L-modules W̃ ⊆ W for which W/W̃ is highest-weight. We prepare some convenient notation for what follows.
Let w̄ ∈ W/W̃ be the cyclic highest-weight vector, of weight h say, and let w ∈ W be any element whose image in W/W̃ is w̄. Thus, there exists U ∈ U(L_{<0}) such that Uw̄ = 0 and so Uw ∈ W̃. For convenience, we shall normalise U as in Proposition 2.2.2. Consequently, every element of the form U_N Uw ∈ U(L_{<0})Uw = U(L_{<0})w ∩ W̃ is guaranteed to be in C_1(W̃) if the weight of U_N ∈ U(L_{<0}) is sufficiently large, as desired.
It remains to describe the modifications needed when W/W̃ ≅ V(c, h)/(V(c, h_1) + V(c, h_2)). First, the role of U is now played by two elements U_i = L_{-1}^{h_i−h} + ··· ∈ U(L_{<0}), i = 1, 2, so that U_i w̄ = 0 and hence U_i w ∈ W̃. The element w′ ∈ C_1(W̃), defined by (3.10), therefore has the corresponding two-term form. The next step is slightly different because comparing with (3.10) leads to an identity in which a_1, a_2 ∈ C satisfy a_1 + a_2 = 1. Thus, we can only conclude that (3.13) holds for all sufficiently large M. However, the embedding diagrams of Theorem 2.2.4 show that when a submodule of V(c, h) is not Verma, the two generating singular vectors (here of weights h_1 and h_2) have a common descendant singular vector (of weight h_3, say). Since Verma modules are free as U(L_{<0})-modules, this means that there exist T_1, T_2 ∈ U(L_{<0}) such that T_1 U_1 = T_2 U_2. Moreover, Proposition 2.2.2 gives T_i = L_{-1}^{h_3−h_i} + ··· as usual. Assuming that M is taken sufficiently large, it now follows that a_2 L_{-1}^{h_3−h_2} U_2 w may be replaced in (3.13) by a_2 T_2 U_2 w = a_2 T_1 U_1 w, modulo terms in C_1(W̃). In other words, we arrive at L_{-1}^N U_1 w ∈ C_1(W̃) for all N sufficiently large and, by swapping the indices 1 and 2 in this argument, also L_{-1}^N U_2 w ∈ C_1(W̃) for all N sufficiently large. By virtue of (3.12), the proof is complete.
We can now prove Proposition 3.1.3. In the filtration (3.1), W n = W ∈ C 1 , so it will suffice to show that W i ∈ C 1 implies that W i−1 ∈ C 1 . We therefore assume that W i ∈ C 1 . As above, let w j ∈ W j /W j−1 be the cyclic highest-weight vector, for each 1 ≤ j ≤ n, and choose w j ∈ W j so that its image in W j /W j−1 is w j .
As w j ∈ W i for each j < i, we have L N j −1 w j ∈ C 1 (W i ) for all sufficiently large N j . By (3.8b), we may therefore write L where U (n) j ∈ U(L <0 ) and w ′ j ∈ C 1 (W i−1 ). The first term on the right-hand side is clearly in U(L <0 )w i . However, it is also in W i−1 because the second term is, as is the left-hand side (because j < i). By Lemma 3.2.1 (with W = W i , W = W i−1 and w = w i ), this first term therefore belongs to C 1 (W i−1 ) for sufficiently large N j . But, the second term does too, hence we have for all j < i and sufficiently large N j . Iterating (3.8), with W = W i−1 , down the filtration (3.1) now gives It follows that W i−1 /C 1 (W i−1 ) is spanned by the (images of the) L m −1 w j , with j < i and m ∈ Z ≥0 . By (3.15), we have dim W i−1 /C 1 (W i−1 ) < ∞ and the proof is complete.
Tensor categories associated to the Virasoro algebra
Recall that O fin c denotes the category of finite length M(c, 0)-modules with composition factors L(c, h) for h ∈ H c and note that O fin c is closed under taking direct sums, generalized submodules, quotient generalized modules and contragredient duals. In this section, we will construct a tensor category structure on O fin c by verifying that all of the conditions needed in the Huang-Lepowsky-Zhang logarithmic tensor theory in [HLZ1]- [HLZ9] hold for O fin c . For convenience, we first recall the general constructions and main results in [HLZ1]- [HLZ9]. 4.1. P (z)-tensor product. In the tensor category theory for vertex operator algebras, the tensor product bifunctors are not built on the classical tensor product bifunctor for vector spaces. Instead, the central concept underlying the constructions is the notion of P (z)-tensor product [HL3,HLZ4,HLZ5], where z is a nonzero complex number and P (z) is the Riemann sphere C with one negatively oriented puncture at ∞ and two ordered positively oriented punctures at z and 0, with local coordinates 1/w, w − z and w, respectively. We refer to [KaR] for an expository account that motivates the definition of this tensor product, also known as the fusion product.
Definition 4.1.1. Let W 1 , W 2 and W 3 be generalized modules for a vertex operator algebra V . A P (z)-intertwining map of type W 3 W 1 W 2 is a linear map I : W 1 ⊗ W 2 −→ W 3 , (4.1) satisfying the following conditions: (i) The lower truncation condition. For any element w (1) ∈ W 1 , w (2) ∈ W 2 and n ∈ C, π n−m (I(w (1) ⊗ w (2) )) = 0 for m ∈ Z ≥0 sufficiently large, (4.2) where π n is the canonical projection of W to the weight subspace W (n) (ii) The Jacobi identity. For v ∈ V , w (1) ∈ W 1 and w (2) ∈ W 2 , Remark 4.1.2. The vector space of P (z)-intertwining maps of type W 3 W 1 W 2 is isomorphic to the space of logarithmic intertwining operators of the same type [HLZ5,Prop. 4.8].
Definition 4.1.3. Let W 1 and W 2 be generalized V -modules. A P (z)-product of W 1 and W 2 is a generalized V -module (W 3 , Y 3 ) together with a P (z)-intertwining map I 3 of type W 3 W 1 W 2 . We denote it by (W 3 , Y 3 ; I 3 ) or simply by (W 3 , I 3 ). Let (W 4 , Y 4 ; I 4 ) be another P (z)-product of W 1 and W 2 . A morphism from (W 3 , Y 3 ; I 3 ) to (W 4 , Y 4 ; I 4 ) is a module map η from W 3 to W 4 such that whereη is the natural map from W 3 to W 4 which extends η.
We recall the definition of a P (z)-tensor product for a category C of V -modules. The notion of a P (z)-tensor product of W 1 and W 2 in C is defined in terms of a universal property as follows.
Definition 4.1.4. For W 1 , W 2 ∈ C, a P (z)-tensor product of W 1 and W 2 in C is a P (z)-product (W 0 , Y 0 ; I 0 ) with W 0 ∈ C such that for any P (z)-product (W, Y ; I) with W ∈ C, there is a unique morphism from (W 0 , Y 0 ; I 0 ) to (W, Y ; I). Clearly, a P (z)-tensor product of W 1 and W 2 in C, if it exists, is unique up to isomorphism. We denote the P (z)-tensor product (W 0 , Y 0 ; I 0 ) by (W 1 ⊠ P (z) W 2 , Y P (z) ; ⊠ P (z) ) (4.5) and call the object (W 1 ⊠ P (z) W 2 , Y P (z) ) (4.6) the P (z)-tensor product of W 1 and W 2 .
We now recall the construction of the P (z)-tensor product in [HLZ5]. Let v ∈ V and let (4.7) Denote by τ P (z) the action of on the vector space (W 1 ⊗ W 2 ) * , where ι + is the operation of expanding a rational function in the formal variable t in the direction of positive powers of t, given by (4.10) Then, we have the operators L ′ P (z) (n) for n ∈ Z defined by (4.11) Given two V -modules W 1 and W 2 , let W 1 P (z) W 2 be the vector space consisting of all the elements λ ∈ (W 1 ⊗ W 2 ) * satisfying the following two conditions.
(1) P (z)-compatibility condition: (a) Lower truncation condition: For all v ∈ V , the formal Laurent series Y ′ P (z) (v, x)λ involves only finitely many negative powers of x. (b) The following formula holds: (4.12) (2) P (z)-local grading restriction condition: (a) Grading condition: λ is a (finite) sum of generalized eigenvectors of (W 1 ⊗ W 2 ) * for the operator L ′ P (z) (0). (b) The smallest subspace W λ of (W 1 ⊗ W 2 ) * containing λ and stable under the component operators τ P (z) (v ⊗ t n ) of the operators Y ′ P (z) (v, x), for v ∈ V and n ∈ Z, satisfies dim (W λ ) [n] < ∞ and (W λ ) [n+k] = 0 for k ∈ Z sufficiently negative and any n ∈ C. Here, the subscripts denote the C-grading given by the L ′ P (z) (0)-eigenvalues.
Theorem 4.1.5 ( [HLZ5]). The vector space W 1 P (z) W 2 is closed under the action Y ′ P (z) of V and the Jacobi identity holds on W 1 P (z) W 2 . Furthermore, the P (z)-tensor product of W 1 , W 2 ∈ C exists if and only if W 1 P (z) W 2 , equipped with Y ′ P (z) , is an object of C. In this case, the P (z)-tensor product is the contragredient of (W 1 P (z) W 2 , Y ′ P (z) ).
To construct a tensor category structure on O fin c , we first need to show that O fin c is closed under P (z)-tensor products. This is an immediate corollary of the following result of Miyamoto.
Theorem 4.1.6 ( [Mi]). Let W 1 , W 2 ∈ C 1 . If W 3 is a lower-bounded generalized module such that there exists a surjective intertwining operator of type W 3 W 1 W 2 , then W 3 is also an object of C 1 . In particular, the P (z)-tensor product W 1 ⊠ P (z) W 2 is an object in C 1 .
Corollary 4.1.7. The category O fin c is closed under taking P (z)-tensor products. Namely, if W 1 , W 2 ∈ O fin c , then W 1 ⊠ P (z) W 2 ∈ O fin c . Proof. By Theorem 3.1.4, W 1 and W 2 ∈ O fin c are lower-bounded and C 1 -cofinite, so using Theorem 4.1.6 we have that the P (z)-tensor product of W 1 and W 2 is also lower-bounded and C 1 -cofinite. By Theorem 3.1.4 again, we have that W 1 ⊠ P (z) W 2 ∈ O fin c .
4.2. Associativity isomorphism. The associativity isomorphism is the most important ingredient of the tensor category theory of Huang-Lepowsky-Zhang. To prove it, one needs the following convergence and extension property introduced in [HLZ8].
Theorem 4.2.3 ( [HLZ7,HLZ8,H4]). Let V be a vertex operator algebra satisfying the following conditions: (1) For any two modules W 1 and W 2 in C and any z ∈ C × , if the generalized V -module W λ generated by a generalized L ′ P (z) (0)-eigenvector λ ∈ (W 1 ⊗W 2 ) * satisfying the P (z)-compatibility condition is lower-bounded, then W λ is an object of C.
(2) The convergence and extension property holds for either the product or the iterates of intertwining operators for V .
Condition (2) in Theorem 4.2.3 is guaranteed by the C 1 -cofiniteness condition.
Proposition 4.2.4. The convergence and extension property for products and iterates holds for O fin c .
Proof. It follows from [HLZ8,Thm. 11.8] that if all the objects W ∈ O fin c are C 1 -cofinite and satisfy dim ℜ(n)<r W [n] < ∞, for any r ∈ R, then the convergence and extension properties for products and iterates of intertwining operators hold. The C 1 -cofiniteness is guaranteed by Theorem 3.1.4 while the second condition is obvious because objects in O fin c have finite lengths.
Theorem 4.2.6. The category O fin c has a braided tensor category structure.
There are many open conjectures about the representation categories of the singlet and triplet algebras [CGan,CM,CMR,CGR,RW1], but the most basic one is the existence of a rigid vertex tensor category structure on the category of finite-length modules [CMR]. Since the singlet (and triplet) vertex operator algebras are objects in the Ind-completion of O fin c for c = 1 − 6(p − 1) 2 /p, so t = 1/p, our results may be used to study the vertex tensor category of modules containing all known indecomposable but reducible modules for the singlet algebra.
Rigidity for generic central charge
In this section, we will study the category O fin c for generic central charges c = 13 − 6t − 6t −1 , meaning that t / ∈ Q, and prove that it is rigid. The simple objects of this category have the form L(c, h r,s ) for r, s ∈ Z ≥1 . Set t = k + 2, so that k / ∈ Q, and recall the notation L r,s = L(c, h r,s ) for r, s ∈ Z ≥1 from Section 2.2. The condition that c is generic is understood to be in force for the rest of the section unless otherwise noted. 5.1. Semisimplicity. We start by establishing that O fin c is semisimple for generic central charges. If ℑ(h) ≠ ℑ(h ′ ), then this sequence obviously splits, so we may assume that ℑ(h) = ℑ(h ′ ). If ℜ(h ′ ) < ℜ(h), then there is a highest-weight vector of conformal weight h ′ in M. From Theorem 2.2.4, the singular vector v h ′ either generates a Verma module V (c, h ′ ) or its simple quotient L(c, h ′ ). But, it has to be L(c, h ′ ) because V (c, h ′ ) is not in O fin c . The exact sequence (5.2) therefore splits.
If ℜ(h ′ ) > ℜ(h), we consider the contragredient dual of M, recalling that L(c, h) and L(c, h ′ ) are self-dual, arriving at the short exact sequence The previous argument then gives Theorem 5.1.2. The category O fin c is semisimple for generic central charges.
Proof. Lemma 5.1.1 shows that there are no extensions between non-isomorphic simple objects. But, [Ro,Thm. 6.4] and [KyR,Prop. 7.5] (see also [GK,Lem. 5.2.2] for a high-powered approach) have shown that there are no self-extensions of L(c, h) for generic central charges.
Despite claims to the contrary in the literature, it is interesting that certain simples do admit self-extensions for non-generic central charges. The L(c, h) that do are classified in [KyR].
5.2. Fusion rules. We next determine the fusion rules of the category O fin c , for generic c, using the Zhu algebra tools developed in [FZ1,FZ2], see also [Li2]. From [FZ1,W], we know that the Zhu algebra of M(c, 0) is the polynomial algebra C[x] [DMZ,Li2].
The fusion rules involving the simple M(c, 0)-modules L 1,s were computed by I. Frenkel and M. Zhu in [FZ2,Prop. 2.24] using the explicit singular vector formula of Benoit and Saint-Aubin [BSA]. Combined with our Theorems 4.2.6 and 5.1.2, their result may be phrased as follows.
Theorem 5.2.1 ( [FZ2]). Let c be generic. Then, for s 1 , s 2 , s 3 ∈ Z ≥1 , we have The obvious analogue for L r 1 ,1 ⊠ L r 2 ,1 also holds. Our interest, however, is in the fusion of L r,1 with L 1,s . This can also be attacked using the same methods, in particular the results [FZ2,Lems. 2.9 and 2.14] which we quote for convenience.
(1) As an A(M(c, 0))-bimodule, we have where f r,s (x, y) is the image of the non-cyclic singular vector of V r,s in A(V r,s ).
(2) Let n denote the residue of n ∈ Z modulo 2. Then, for r, s ∈ Z ≥1 , we have f r,1 = g ′ r g ′ r−2 . . . g ′ r and f 1,s = g s g s−2 . . . g s , (5.6) where g 1 (x, y) = g ′ 1 (x, y) = x − y and, for r, s ∈ Z ≥2 , Specialising [FZ2,Lem. 2.22] to the case we are interested in, we arrive at the following characterisation of the fusion rules.
Fix now r, s ∈ Z ≥1 and consider first the h r ′ ,s ′ that solve (5.8a). Direct calculation shows that the solutions of g ′ r (x, h 1,s ) = 0 are x = h r,s and, if r > 1, x = h −r+2,s . An easy calculation verifies that h r,s = h r ′ ,s ′ ⇐⇒ (r, s) = (r ′ , s ′ ) or (−r ′ , −s ′ ), (5.9) since t / ∈ Q. It follows that h −r+2,s / ∈ H c , for r > 1, and so these solutions of (5.8a) correspond to simple Verma modules V −r+2,s = L −r+2,s / ∈ O fin c and hence cannot appear in the fusion of L(r, 1) and L(1, s), by Theorem 4.2.6. The viable solutions are therefore h r ′ ,s ′ = h r,s , h r−2,s , . . . , h r,s . However, the same analysis gives the viable solutions of (5.8b) as h r ′ ,s ′ = h r,s , h r,s−2 , . . . , h r,s . Appealing to (5.9), we conclude that there is a unique solution to Equations (5.8a) and (5.8b): h r ′ ,s ′ = h r,s . We can now state our main fusion rule.
Theorem 5.2.4. Let c be generic. Then, for r, s ∈ Z ≥1 , we have L r,1 ⊠ L 1,s = L r,s . (5.10) Proof. Since O fin c is tensor (Theorem 4.2.6) and semisimple (Theorem 5.1.2), L r,1 ⊠ L 1,s decomposes as a finite direct sum of simples. The previous arguments establish that the only possibilities for this decomposition are 0 or L r,s , the deciding condition being whether (5.8c) is satisfied (for r ′ = r and s ′ = s). Unfortunately, computing the left-hand side of (5.8c) directly is difficult.
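As a consistency check on the weight comparison (5.9) used above, one can argue from the standard Kac parametrisation of the conformal weights (assumed here; this excerpt does not restate the formula from Section 2.2):

```latex
c = 13 - 6t - 6t^{-1},
\qquad
h_{r,s} = \frac{(rt - s)^2 - (t - 1)^2}{4t},
\qquad r, s \in \mathbb{Z}_{\geq 1}.
```

With this convention, $h_{1,1} = 0$ and $h_{r,s} = h_{r',s'}$ forces $(rt - s)^2 = (r't - s')^2$, that is $rt - s = \pm(r't - s')$. Since $t \notin \mathbb{Q}$, comparing coefficients of $t$ in each case gives $(r, s) = (r', s')$ or $(r, s) = (-r', -s')$, which is exactly (5.9).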
We remark that this theorem proves, for generic c, a well-known conjecture of physicists. Of course, physicists are more interested in the non-generic version of this conjecture [MRR,Eq. (4.34)].
5.3. Coset realizations of the Virasoro algebra. The next ingredient for our proof of the rigidity of O fin c is the well-known coset realization of the Virasoro vertex operator algebra [GKO, ACL]. We review this here for general simply laced Lie algebras before specializing to sl 2 .
Let P + be the set of dominant integral weights of a simple complex Lie algebra g and let Q be its root lattice. Let g = g[t, t −1 ] ⊕ CK be the affinization of g. For ℓ ∈ C and λ ∈ P + , let V ℓ (λ) = U( g) ⊗ U (g[t]⊕CK) E λ , where E λ is the finite-dimensional g-module of highest weight λ regarded as a g[t] ⊕ CK-module on which tg[t] acts trivially and K acts as multiplication by the level ℓ. Denote by L ℓ (λ) the simple quotient of V ℓ (λ). The modules V ℓ (0) and L ℓ (0) admit, for ℓ = −h ∨ , a vertex operator algebra structure [FZ1]. To emphasize the dependence of these vertex operator algebras on g, we denote them by V ℓ (g) and L ℓ (g).
It is known that the category KL k of ordinary L k (g)-modules has a rigid braided tensor category structure for all k such that k + h ∨ / ∈ Q + , k ∈ Z ≥0 , or k is admissible [KL1]- [KL5], [HL5,CHY]. When k ∈ Z ≥0 , the simple objects of KL k are rather the L k (λ) with λ ∈ P k + = {λ ∈ P + | (λ, θ) ≤ k}, where θ is the longest root of g. Here, we study the generic case k / ∈ Q, for which V k (g) = L k (g) and the simple objects of KL k are the L k (λ) with λ ∈ P + . Let W k (g) be the W -algebra associated with g and a principal nilpotent element of g at level k. Let W k (g) be the unique simple quotient of W k (g). We denote by χ λ the central character associated to the weight λ ∈ P + and let M k (χ λ ) be the Verma module of W k (g) with highest weight χ λ and L k (χ λ ) its unique simple (graded) quotient.
Definition 5.4.1. An associative algebra in C is a triple (A, µ A , ι A ) with A ∈ C and µ A : A⊠A → A, ι A : 1 → A morphisms in C satisfying the following axioms.
(1) Unit: Definition 5.4.2. For an associative algebra A in C, define C A to be the category of pairs (X, µ X ) for which X ∈ C and µ X ∈ Hom C (A ⊗ X, X) satisfy the following.
(1) Unit: A to be the full subcategory of C A consisting of local modules: those objects (X, µ X ) such that µ X • c X,A • c A,X = µ X .
When A is commutative, the category C A is naturally a tensor category with tensor product ⊠ A and unit object A, while the subcategory C 0 A is braided tensor (see for example [KO] -the case in which A is an object of the direct sum completion of C, that is a countably infinite direct sum of objects, is addressed in [AR,CGR,CKM2]).
Let V be a vertex operator algebra and C be a category of V -modules with a natural tensor category structure. The following theorem allows one to study vertex operator algebra extensions using abstract tensor category theory.
Theorem 5.4.3 ( [HKL,CKM1]). A vertex operator algebra extension V ⊂ A in C is equivalent to a commutative associative algebra in the braided tensor category C with trivial twist and injective unit. Moreover, the category of modules in C for the extended vertex operator algebra A is braided tensor equivalent to the category of local C-algebra modules C 0 A via the induction functor F A .
We recall that the twist θ X of an object X in a category of modules over a vertex operator algebra is given by the action of e 2πiL 0 . To say that A has trivial twist above therefore means that L 0 acts semisimply on A with integer conformal weights. 5.5. Rigidity. It is time to put all these ingredients together to prove the rigidity of O fin c for all generic central charges. We first use Theorem 5.4.3 to prove a braided tensor equivalence between the full subcategory O L c ⊂ O fin c , generated by the simple Virasoro modules L µ,1 with µ ∈ Z ≥1 , and a simple current twist of the category KL ℓ of ordinary L ℓ (sl 2 )-modules. Here, ℓ is related to k by (5.13), hence to t = k + 2 and the (generic) central charge c = 13 − 6t − 6t −1 . Let Then, A is a commutative associative algebra object in the direct sum completion of KL ℓ+1 ⊠ O fin c . Denote by F A : KL ℓ+1 ⊠ O fin c → KL ℓ+1 ⊠ O fin c A the induction functor.
Lemma 5.5.1. The restriction of the induction functor F A to the full subcategory O L c is fully faithful.
Restricting F A to O L c , its image is the full subcategory of KL 1 ⊠ KL ℓ whose simple objects have the form L 1 (µ) ⊗ V ℓ (µ). Denote this category by (KL 1 ⊠ KL ℓ ) 0 . This category is a rigid semisimple tensor category with the same simple objects and fusion rules as the category of finite dimensional sl 2 -modules ([KL1]- [KL5], [L]). This proves the following proposition. We now come to the main theorem of this section.
Theorem 5.5.3. The category O fin c is rigid for generic central charges.
Proof. As O L c is rigid, the simple objects L µ,1 , with µ ∈ Z ≥1 are rigid. An identical argument shows that the L 1,ν with ν ∈ Z ≥1 are likewise rigid. It follows now from Theorem 5.2.4 that L µ,ν ∼ = L µ,1 ⊠ L 1,ν is rigid. Since O fin c is semisimple (Theorem 5.1.2), this completes the proof.
Remark 5.5.4. Braided tensor equivalences between categories of modules for W -algebras and affine vertex operator algebras may be viewed as reformulations of problems in the quantum geometric Langlands program. For example, Proposition 5.5.2 is the case g = sl 2 , N = 1 and β generic of [AFO,Conj. 6.4], up to a simple current twist (see also [C1,Rem. 7.2]).
We finish by demonstrating that O fin c is moreover non-degenerate. Recall that in a rigid braided tensor category C, an object T ∈ C is called transparent if it has trivial monodromy with every other object of C, that is if c X,T • c T,X = id T ⊠X for all X ∈ C. C is said to be non-degenerate if the only transparent objects are finite direct sums of the tensor unit. In our case, non-degeneracy follows easily from the balancing axiom where we recall that θ X denotes the twist of X ∈ C.
Proposition 5.5.5. The category O fin c is non-degenerate for generic central charges.
A Clinical Survey Regarding Decision-Making for the Choice of Restorative Material in Endodontically Treated Teeth among Dentists
Purpose: The objective of this study was to identify a more suitable restorative material for endodontically treated teeth (ETT), to raise awareness among dentists regarding the choice of restorative material, and to identify reasons for restoration failure. Materials and Methods: A one-page questionnaire was designed to investigate dentists' decision-making for the choice of restorations for endodontically treated teeth. The questionnaire included questions related to suitable restorative materials for ETT. Results: Tooth-colored composite was considered the material of choice among dentists when more than half of the natural tooth structure remained after endodontic treatment. Tooth-colored composite and prefabricated post with tooth-colored crown were equally preferred if less than 50% of the tooth structure was remaining. There was no statistically significant difference in the preferred restorative materials between general dental surgeons and specialists. The majority of dentists considered that posts affect the esthetic outcome of anterior teeth, and they took mechanical stresses into account when restoring anterior or posterior teeth. Composite was considered the material of choice among both specialists and general dentists. Conclusion: Within the limitations of this study, the following conclusions can be drawn: composite is considered the material of choice among general dentists and specialists alike. Despite slight variations, there were no statistically significant differences in the preference of materials used in endodontically treated teeth between general dental surgeons and specialists.
Introduction
The restoration of endodontically treated teeth, which are mostly affected by caries, fracture or multiple restorations, is an integral part of restorative dentistry 1 . The primary goal of endodontic treatment followed by restoration is to restore normal function as well as esthetics 2 . Studies have shown that restorative failure, rather than endodontic failure itself, is a major cause of overall treatment failure 3 . One major cause of such failure is microleakage from the coronal restoration, which can lead to endodontic failure and thus failure of the entire treatment [4][5] . The restorability of a tooth should be assessed before the start of endodontic treatment. Multiple factors should be kept in mind before initiating treatment and formulating a treatment plan, such as the position of the tooth in the arch, the crown/root ratio, mobility status, existing prostheses and the type of occlusal guidance [5][6][7] . The restorative options vary according to the amount of remaining tooth structure and the contributing factors discussed above. It is very important to select the restoration that is most suitable for a particular case. A variety of restorative materials is available, such as amalgam, composites, glass ionomers and resin-modified GIC, which are mostly used directly [6][7][8] . The indirect restorative options include ceramics, metal-ceramics, cast gold alloys and base metal alloys, which are fabricated in the dental laboratory and then cemented 9 . Multiple techniques for the use of direct restorations have been claimed to prevent microleakage underneath the restoration and to provide maximum strength to the overall restoration 10 . Prefabricated and custom-made posts are available for the restoration of ETT [11][12] . The primary function of a post is to retain the core material, not to provide strength to the overall restoration.
At least 2 mm of ferrule is recommended for the overall success of the restoration and higher fracture resistance 13 .
The objective of the study is to identify a more suitable restorative material for ETT, to raise awareness among dentists regarding the choice of restorative material, and to identify reasons for restoration failure. The methodology of this study is a one-page questionnaire designed to investigate dentists' decision-making for the choice of restorations for endodontically treated teeth. The questionnaire includes questions related to suitable restorative materials for ETT.
Material and Methodology
All dental practitioners at King Khalid University College of Dentistry were enrolled in this study. A pre-tested and validated self-administered questionnaire was used to investigate awareness, practices and decision-making regarding restorative materials for endodontically treated teeth among dentists. The questionnaire included questions on the suitable restorative material for ETT when more or less than half of the tooth structure remains in posterior teeth, and likewise in anterior teeth; whether the dentist considers mechanical stress while choosing a restorative material for ETT; which filling materials fail more often in ETT; and the primary reason for restorative failures in endodontically treated teeth.
Results
A total of 121 dental practitioners were included in the study, among which 101 (83.5%) were general dental surgeons while the rest were specialist dental surgeons. General dental practitioners were in their first year of dental practice, while specialist prosthodontists had at least five years of experience after post-graduation.
Tooth colored composite was the preferred restorative material for an endodontically treated tooth when more than 50% of the tooth structure was remaining, among both general dental surgeons and specialists; however, tooth colored composite and prefabricated post with tooth colored crown were equally preferred by specialists if less than 50% of the tooth structure was remaining. Tooth colored crown was the most preferred restorative material among general dental practitioners if 50% of the tooth structure was remaining, compared to prefabricated post with tooth colored crown being the most preferred among specialists. There was no statistically significant difference in the preferred restorative materials between general dental surgeons and specialists.
The majority of general dental surgeons (63.4%) and specialists (40%) thought that the effect of a post on the esthetic outcome of an anterior tooth depends on the remaining tooth structure. Furthermore, the majority of general dental surgeons (71.3%) and specialists (75%) considered mechanical stress when planning the restoration of an anterior tooth. Similarly, the majority of general dental surgeons (87%) and specialists (85%) considered mechanical stress when restoring a posterior tooth. According to both general dental surgeons and specialists, composite was the preferred core material used in ETT, while GIC was thought to be associated with greater failure in ETT.
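The group comparisons above (general dental surgeons versus specialists) are the kind of analysis usually done with a contingency-table test such as Pearson's chi-square. The sketch below only illustrates that computation: the cell counts are invented placeholders, since the paper does not report per-cell counts, and the helper function name is ours.

```python
# Pearson chi-square statistic for a contingency table.
# Rows = practitioner group, columns = preferred restorative material.

def chi_square(table):
    """Return the Pearson chi-square statistic for a table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# ILLUSTRATIVE counts only: e.g. 60/101 generalists and 11/20 specialists
# preferring composite. A statistic this small (~0.13 on 1 degree of
# freedom) would indeed be non-significant.
print(round(chi_square([[60, 41], [11, 9]]), 3))
```

In practice the statistic would be compared against the chi-square distribution with (rows−1)(columns−1) degrees of freedom (for example via `scipy.stats.chi2_contingency`) to obtain a p-value.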
Discussion
The main purpose of the study was to find the most preferred restorative material for the restoration of anterior and posterior teeth when more than half, or less than half, of the tooth structure remains after endodontic treatment. Tooth colored composite and tooth colored crown were preferred as the materials of choice by both general dental surgeons and specialists, without any significant variations. According to the results of the study, the effect of a post on esthetic outcomes, and hence its selection, depends on the amount of remaining tooth structure after endodontic treatment. Mechanical stress should be taken into consideration when restoring both anterior and posterior endodontically treated teeth, according to the results of the study.
Owing to the endodontic treatment or a previous carious lesion, an endodontically treated tooth becomes weak and prone to fracture 1 . There are multiple factors which should be kept in mind before proceeding to the final restoration; otherwise, the treatment could end in failure 1-2 . Previous studies support a conservative line of treatment with GIC, composite or resin composites when more than half of the tooth structure is present after endodontic treatment [1][2][3] . When more than half of the tooth structure is lost after endodontic treatment, a tooth colored crown is preferred 2-4 . Posts can be prefabricated or cast, but prefabricated posts are the most widely used [5][6] . The role of an endodontic post is only to retain the core material 7 . At least 2 mm of ferrule is necessary when restoring the tooth with a crown [7][8]9,10 . Mechanical stress should be taken into consideration when restoring anterior or posterior ETT 10,11-12 . This is also supported by the present study.
More studies should be conducted on a larger scale regarding material preferences and awareness of contemporary composites, GIC, other newer materials and the available restorative treatment options. Their clinical implications should be taught at the undergraduate and postgraduate levels, so that clear guidance can be established for selecting the restorative material according to the clinical scenario and decision-making can be made easier.
Conclusion
Despite slight variations, there were no statistically significant differences found in the preference of materials used in endodontically treated teeth between general dental surgeons and specialists.
Marek’s disease virus (MDV) is a highly contagious herpesvirus which induces T-cell lymphoma in the chicken. This virus is still spreading in flocks despite forty years of vaccination, with important economic losses worldwide. The feather follicles, which anchor feathers into the skin and allow their morphogenesis, are considered the unique source of MDV excretion, causing environmental contamination and disease transmission. Epithelial cells from the feather follicles are the only known cells in which high levels of infectious mature virions have been observed by transmission electron microscopy and from which cell-free infectious virions have been purified. Finally, feathers harvested from animals and dust are today considered excellent materials to monitor vaccination, the spread of pathogenic viruses, and environmental contamination. This article reviews the current knowledge on MDV–skin interactions and discusses new approaches that could solve important issues in the future.
Introduction
Marek's disease virus (MDV), or Gallid herpesvirus 2 (GaHV-2), is the etiological agent responsible for Marek's disease (MD) in the chicken, a multifaceted disease most widely recognized by the induction of a rapid and extensive malignant T-cell lymphoma. MD has been shown to occur worldwide according to data from the World Organisation for Animal Health (OIE), although data are difficult to obtain because MD is not a notifiable disease. MD results in substantial economic losses, estimated at more than 1 billion per year [1]. Although MD was described in 1907 by Joseph Marek, the virus (MDV) was only isolated in 1967 in the United Kingdom [2] and the United States [3] independently. MDV belongs to the family Herpesviridae, the subfamily Alphaherpesvirinae, and the genus Mardivirus (for Marek's disease-like viruses). MDV was initially classified within the Gammaherpesvirinae owing to its biological properties, but was reclassified in 2002 (after the complete sequencing of its genome) into the new Mardivirus genus, for which it became the type species [4]. To date this genus comprises 4 other species: Gallid herpesvirus 3 (GaHV-3), Meleagrid herpesvirus 1 (MeHV-1), commonly known as herpesvirus of turkey (HVT), Anatid herpesvirus 1 and Columbid herpesvirus 1.
Like MDV, GaHV-3 and HVT infect domestic fowl, but they are not pathogenic.
MDV is the first oncogenic virus for which an effective vaccine was developed, in the late sixties [5][6][7]. In the early seventies, when large-scale vaccination started in poultry houses, MDV was responsible for high mortality and morbidity. Since that time, vaccination has allowed the thriving industrial production of eggs and poultry meat. All currently used vaccines are live vaccines derived from three viral strains: the HVT FC126 strain [7], the GaHV-3 SB-1 strain [8], and the GaHV-2 CVI988/Rispens strain [9]. The HVT and SB-1 vaccines are considered heterologous vaccines because they are derived from a viral species different from the virus they are intended to protect against, while the Rispens vaccine is considered homologous because it belongs to the same viral species as the targeted virus.
Marek's disease virus
Herpesvirus infectious particles comprise more than 30 different proteins, assembled according to a complex architecture including the following: (i) a central capsid containing the viral genome, (ii) a protein layer termed the tegument, comprising more than 15 proteins, and (iii) a lipid bilayer in which about 10 envelope glycoproteins are anchored. The MDV genome is a linear double-stranded DNA of approximately 175 kb, which contains a unique long (UL) sequence and a unique short (US) sequence, both flanked by terminal repeat (TR) and internal repeat (IR) sequences [4]. Owing to its structure, this genome belongs to group E, like that of the human herpesvirus 1 (HHV-1) [4]. However, some genes are specific to MDV, such as the genes encoding the Meq oncoprotein or the pp38 phosphoprotein [4].
To date, MDV replicates efficiently only in primary chicken or duck cells in culture [2,3], yielding titers between 10^5 and 10^7 pfu/mL depending on the strain. MDV infections are performed by co-culturing infected cells with naïve cells, because the virus cannot be purified as cell-free virus from cell lysates or culture supernatants. These characteristics impose constraints on vaccine production.
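Titers expressed in pfu/mL follow from standard plaque-assay arithmetic: plaques counted in a well, divided by the dilution factor and the inoculum volume plated. The sketch below is purely illustrative; the plaque count, dilution, and volume are hypothetical values, not data from the studies cited here.

```python
def titer_pfu_per_ml(plaque_count: int, dilution: float, inoculum_ml: float) -> float:
    """Back-calculate a viral titer (pfu/mL) from one plaque-assay well.

    titer = plaques / (dilution factor x volume plated)
    All parameter values used below are hypothetical.
    """
    return plaque_count / (dilution * inoculum_ml)

# Hypothetical well: 40 plaques from 0.2 mL of a 1e-4 dilution
# -> ~2e6 pfu/mL, within the 10^5-10^7 range quoted for MDV cultures
print(f"{titer_pfu_per_ml(40, 1e-4, 0.2):.1e} pfu/mL")
```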
Pathophysiology of Marek's disease
The current model of MD pathophysiology was initially proposed by Bruce Calnek [10,11] and is depicted in Figure 1. MDV enters via the chicken respiratory tract after inhalation of contaminated dust. MDV then infects B lymphocytes and macrophages in the lungs [12] and is transported towards the main lymphoid organs (bursa of Fabricius (see lexicon), thymus, and spleen). After replicating in B lymphocytes, MDV infects activated T lymphocytes, mainly CD4+ cells. It is believed that only a few T lymphocytes undergo transformation and are at the origin of the T lymphoma, which may be either monoclonal or oligoclonal [13]. This lymphoma is mostly localized in visceral organs (kidneys, spleen, liver, gonads, and proventriculus), peripheral nerves, skin, and muscles. In most transformed T lymphocytes, the virus is in the latent phase and does not produce viral particles. Only a small proportion of tumor cells (< 0.01%) express lytic viral antigens and contain viral particles detectable by transmission electron microscopy (TEM) [14]. Of note, unlike most alphaherpesviruses, which establish latency in neurons, MDV enters latency only in lymphocytes. Early during infection, the virus is transported towards the skin, most specifically to the feather follicles. From infected feather follicles, MDV is shed into the environment via scales and feather debris, which become the major source of contamination of other birds in the natural environment. Bird-to-bird transmission is exclusively horizontal.

Figure 1. Pathophysiology of Marek's disease (adapted from the Calnek model [10,11]). Marek's disease virus (MDV) enters the chicken through the respiratory tract. MDV has a tropism for B- and T-lymphocytes as well as for the feather follicle epithelium, from which MDV is shed into the environment. Feathers, skin dander, and dust are the major sources of MDV infectious material and the basis of horizontal bird-to-bird transmission in field conditions.
There is no vertical transmission from the chicken to the egg, even though the embryo can be experimentally infected [15]. In typical housing conditions, it is believed that animals become contaminated at a young age. MDV interaction with chicken skin is considered the major cause of MDV persistence in poultry houses, and the evolution of the virus towards increasingly virulent genotypes has been observed over the past decades [16,17]. Accordingly, in this review we present the current state of knowledge of MDV interactions with chicken skin. For other aspects of MDV biology, we refer the reader to other reviews [18][19][20].
Chicken skin structure
In vertebrates, the skin is the first layer of protection against the external environment. The skin plays an important role in thermal, hygrometric, and chemical regulation. Bird skin differs from that of mammals by its thinness, by the presence of feathers instead of hair, and by the absence of sebaceous glands, although the overall histological structure is similar [21,22]. Bird skin is composed of an epidermis separated from a dermis by a basal membrane (Figures 2 and 3). Table 1 presents the cell markers mentioned in this review that are used to characterize the various skin layers; these markers are generally defined by their homology to those of mammals, based on their DNA sequence. The basal membrane is a thin and continuous layer which serves as a molecular filter and as an anchoring point for the epidermal basal cells via hemidesmosomes. This extracellular matrix is mainly constituted of type IV collagen and proteoglycans. The bird dermis is relatively thin compared to that of mammals. It is mainly constituted of connective tissue arranged in a superficial layer (or stratum superficiale) and a deep layer (or stratum profundum). The dermis can be identified by the expression of cell markers such as fibronectins. The epidermis is a multistratified, keratinized squamous epithelium, whose thickness varies depending on the region of the body. The deep layer of the epidermis (stratum germinativum) is composed of live cells arranged in three layers: the basal, intermediate, and transitional layers (Figure 2). The basal layer, which lies next to the basal membrane, is constituted of small undifferentiated cuboidal cells, which have a high division rate and migrate towards more superficial layers. The basal layer can be identified with cell markers such as basonuclin 2 and keratins 5 and 14 [23,24] (Figure 2B). The intermediate layer is constituted of cuboidal cells that have migrated from the basal layer. The bird intermediate layer is similar to the mammalian spinous layer.
The intermediate layer can be detected via the expression of transglutaminase 5 or desmoglein 2 [25]. The transitional layer is constituted of two or three layers of flat elongated cells containing a large number of intracellular lipid vacuoles or droplets, which is typical of bird skin. This layer expresses keratins 10 and 75 (alpha-keratin KIIB) [24,26]. The external layer of the epidermis, or cornified layer (also called the stratum corneum), is composed of corneocytes, which are flat, dead, anucleated keratinized cells organized in sheets. This layer can be identified by the presence of involucrin, loricrin, or filaggrin [27] (Figure 2B). The differentiation of basal cells into corneocytes is a normal physiological process in the epidermis. The main cellular modifications are the loss of organelles, the formation of lipid vacuoles and keratin fibers in the cytoplasm, and a thick envelope under the plasma membrane [22]. Corneocytes, which detach regularly from the epidermis, are constantly renewed by cells from the lower layers. This process, called exfoliation or desquamation, results from the loss of desmosomes between corneocytes.
As in mammals, the chicken epidermis contains dendritic cells (Langerhans cells), whose number is estimated at 8000 per mm² of epidermis in an 8-week-old chick [28,29]. These two studies were conducted in the apteric areas (see lexicon) of the skin, which have no feathers. Following antigenic stimulation, these cells seem to migrate to dermal lymphoid nodules, and not to lymph nodes, which are absent in birds [29]. Besides the feathers, the bird epidermis contains melanocytes, including in non-colored chickens. The "silky-chicken" strains, which have a dark skin and white or black feathers ("white silky" or "black silky"), are the only strains that also have a large number of melanocytes in the dermis and in the connective tissue of deep organs [30].
Feathers and the feather follicle

Feathers are the most complex and most diversified integumentary products found in vertebrates. Feathers are exclusively constituted of β-keratin [31] and arise from the feather follicle. The feather follicle forms by invagination of the epidermis around the feather filament cylinder into the dermis, at day 14 of embryogenesis, which lasts 21 days in chickens [32]. There are as many feather follicles as there are feathers on the skin, i.e., between 20 000 and 80 000 depending on the bird species [32]. At the base of the feather follicle are located the dermal papilla, the epidermal collar, and the collar bulge (Figure 3). Follicle stem cells, which are located in the collar bulge, give rise to a population of transient amplifying (TA) cells, which allow the renewal of the feather and the follicle after molting or after accidental plucking of the feather [33,34]. Repeated molting ensures the regular renewal of a bird's feathers throughout its lifespan.
Feather follicles contain melanocytes responsible for the color of the feathers, as well as melanocyte stem cells, which were recently identified by the Chuong laboratory [35]. In a regenerating follicle, melanocyte stem cells (pigmented or not) are located in the epithelium, above the dermal papilla, in the lower part of the bulge. In a resting feather follicle, melanocyte progenitors move into the dermal papilla, where they remain quiescent [35].
At the feather level, pulp cells originate from the dermal papilla cells, while all other cells derive from the epidermal collar and the collar bulge [32]. The base of the feather is vascularized by an arteriole which passes through the dermal papilla and the pulp of the feather (Figure 3).
Feather follicles support the excretion and horizontal transmission of MDV
It has been known since 1963 that, in natural conditions, disease transmission is airborne [36,37], suggesting that the virus is excreted and relatively resistant in the external environment. Moreover, the observation of cutaneous lesions in birds with MD and the detection of MDV antigens via immunofluorescence in feather follicles led early on to the suspicion that feather follicles were involved in the excretion of the virus [38]. In 1970, it was shown that dust, scales, and feather debris collected in infected poultry houses could lead to MD after intra-abdominal administration to chicks or after introduction into the confined environment of healthy chickens [39,40]. The presence of infectious virions in the skin and feather follicles of infected chickens was confirmed a few months later by the teams of Calnek and Nazerian [41,42]. To this end, skin or feather tip homogenates of infected chickens were observed using negative-staining TEM. When administered to healthy chickens, this material was capable of reproducing MD. These findings demonstrated that the feather follicle can produce complete, mature, infectious virions harboring a tegument and an envelope. Still today, feather follicles constitute the only biological material that allows the extraction of enveloped infectious virions and transmission of the infection in the absence of associated cells. The infectiousness of MDV in the environment can last up to 7 months at room temperature [43] and 16 weeks in litter [44], a duration that is unusual for a herpesvirus. These findings suggest that infectious viral particles are probably not in direct contact with the environment but physically protected from degradation, possibly by cellular material (see the section on viral morphogenesis below).
Methods for MDV detection in the skin or feathers (diagnostic methods)
In this section we only cite the methods that have been applied to MDV detection in the skin and/or feathers. Until the 1980s, these methods were aimed at detecting viral antigens by immunofluorescence on tissue sections [38], or by gel immunodiffusion or ELISA from feather tip cell extracts [45,46]. In the 1960s and 70s, these antigens were detected using sera from infected chickens. Today, polyclonal sera and monoclonal antibodies against single viral proteins are also available. TEM has been used to visualize viral particles in situ in the skin or in tissue extracts (see the section on viral morphogenesis below). Since the 1990s, new methods based on molecular biology techniques have appeared, enabling mardivirus genome detection (PCR) [47,48] and quantification (qPCR) [49][50][51][52]. It is also possible to detect viral DNA in feathers by pulsed-field gel electrophoresis (PFGE) [53] or via in situ hybridization [54]. An inexpensive and rapid method of amplification of the viral genome called LAMP (loop-mediated isothermal amplification) has recently been developed to allow rapid diagnosis under field conditions [55]. The substantial improvement of sequencing techniques has also allowed the direct sequencing of viral DNA extracted from feather tips, for instance to detect coinfections [56]. Moreover, PCR methods allow the detection and quantification of viral DNA in dust collected and concentrated on filters [51,57,58]. Finally, MDV can also be re-isolated from feather pulp via co-culture in vitro [14]. To this end, the pulp is extracted from the base of the feather and digested using collagenase, and the resulting cell suspension is incubated with a monolayer of permissive cells.
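Genome quantification by qPCR typically converts a threshold cycle (Ct) into a copy number through a standard curve built from serial dilutions of a plasmid or viral DNA standard. The sketch below illustrates only that generic conversion; the slope and intercept are hypothetical placeholders, not parameters from the assays cited above (a slope near -3.32 corresponds to roughly 100% amplification efficiency).

```python
def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    """Interpolate genome copies from a Ct value on a standard curve:
    Ct = slope * log10(copies) + intercept.
    Slope and intercept here are hypothetical; a real assay fits them
    to its own dilution series.
    """
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope: float) -> float:
    """Amplification efficiency implied by a standard-curve slope
    (1.0 means the template doubles each cycle)."""
    return 10 ** (-1 / slope) - 1

# Hypothetical feather-tip sample
copies = copies_from_ct(24.7)   # ~1e4 genome copies per reaction
eff = efficiency(-3.32)         # ~1.0, i.e. ~100% efficiency
```

Because a lower Ct means more template, samples such as feather tips, which carry high MDV loads, would sit at the low-Ct end of such a curve.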
In recent years, feathers and dust have been considered the materials of choice to follow the evolution and distribution of pathogenic and vaccine strains of mardiviruses in poultry houses [59]. Four to five pulp-rich feathers, preferably collected from the axillary tract, are sufficient to detect viral DNA using qPCR (S. Baigent, personal communication).
MDV replication in feather follicles
Regarding viral antigen expression, the feather follicle epithelium is the tissue most commonly found positive in infected chickens, compared to other tissues [38,45]. It is also the infected tissue that expresses the highest level of viral antigens for the longest period of time. These antigens are located in the upper layers of the stratum germinativum of feather follicles (Figure 4). Viral antigens are detectable in the feather follicles of feather tips 11 to 14 days post-infection (pi) using standard biochemical methods [60,61]. With more sensitive methods such as qPCR, viral DNA can be detected as early as 6-7 days pi in feather tips and in dust collected in isolation units [42,62,63]. A recombinant virus encoding the tegument gene UL47 fused with mRFP (monomeric Red Fluorescent Protein) allows the detection of lytic viral infection in feather follicles using fluorescence as early as day 8 pi [63]. The difference between the detection of the viral genome and that of its expression is due either to the difference in method sensitivity or to the delay between viral replication and the accumulation of late viral proteins to a sufficient level. The kinetics of replication of mardiviruses in feathers have been found to vary depending on the virus strain [63]. These variations do not seem to be directly linked to the strain's virulence, as was formerly believed [42]. In fact, non-virulent strains can be detected in feathers and dust as early as highly virulent strains, and can even be excreted at higher levels [62,64]. It is noteworthy that excretion of MDV strains increases considerably from 7 to 28 days pi and reaches a plateau thereafter, according to quantitation experiments of viral genomes conducted on dust in isolation units [62]. Moreover, there is a strong correlation between the quantities of the MDV genome measured in feathers and in dust [57].
Coinfection of birds with two pathogenic strains (regardless of their similarity in genotype or pathogenicity) leads to the replication of both viruses within the same feather follicle. This was demonstrated in several studies on feather follicle sections using fluorescence or immunohistochemistry, utilizing viruses that carry different antigenic markers or express different fluorescent reporter genes (e.g., GFP and mRFP) [56,65]. Jarosinski also showed that two fluorescent viruses with the same genotype can infect the same feather follicle cell [65]. This suggests that genetic recombination between two different genomes could occur in the feather follicle to yield new strains. However, analysis of the frequency and distribution of two viral genomes after coinfection at different times pi by pyrosequencing has shown that some strains may preferentially replicate in feather follicles when compared to other strains [56].
MDV tropism for feather follicles: hypotheses
The mechanisms by which MDV infects the skin and feather follicles are poorly understood. Because B and T lymphocytes are the major targets of MDV and are infected early on [10,12], it is probable that these cells are the vehicle for feather follicle infection. However, this has not been formally demonstrated; therefore the involvement of other blood cells (e.g., macrophages and/or dendritic cells) cannot be excluded. In addition, for most pathogenic strains, replication starts at 1 week pi in the feather follicle, well before tumor development. It is therefore unlikely that transformed cells migrating into the skin are responsible, as at this time there are no or very few transformed cells.
Regarding how the virus reaches the transitional layer of the feather follicle epithelium, many questions remain unanswered: Why is the virus mainly present in the epidermis of feather follicles and not in the epidermis of the whole skin? Is the epidermis infected directly or indirectly, via the dermis? Does the virus directly infect the upper layers of the epidermis, or does it enter the basal layer first and replicate only once those cells differentiate? How does the virus cross the basal membrane?
Various speculative scenarios can be proposed: (i) "cargo" infected cells (lymphocytes or other) infiltrate the skin epithelium and transmit the virus to the upper epithelial cells of the epidermis, from which the virus propagates to neighboring cells and so on; (ii) lymphocytes infiltrate the dermis or the dermal papilla and infect neighboring cells such as fibroblasts or melanocyte precursors, which in turn transmit the virus to the basal epithelial cells of the epidermis, in which case MDV must cross the basal membrane; or (iii) lymphocytes directly infect the follicle stem cells located in the bulge of the feather follicle, and the infection spreads widely to the TA cells (see the section on the feather follicle above) that are involved in the repair of the follicle wall and the feather during feather regeneration, a process that occurs frequently at a young age. The development of new techniques and methods, such as transgenic chickens harboring fluorescent transgenes in specific cell lineages (lymphocytes or dendritic cells, for instance), methods enabling the in vitro culture of chicken skin that mimics a multilayer epithelium, and two-photon imaging of thick tissue, should help answer these questions in the near future.
Impact of host genetics on MDV replication in the skin
All lines of Gallus gallus, including exotic ones [66,67], seem susceptible to MDV infection. Interestingly, in poultry houses, MD similarly affects chicken breeds for meat production and those for egg production, even though these two types of production may not be equally affected in some countries due to breeding practices. Although some chicken genetic markers have been shown to be involved in the susceptibility or resistance of chickens to tumors [68,69], no marker so far has been shown to regulate viral production in the epithelium of feather follicles. Further research in this area may help reduce or block the excretion and spread of pathogenic MDV strains.
Two chicken lines with mutations that affect their normal skin physiology have particular patterns of skin interaction with MDV that merit attention. The first is the "scaleless" line, which carries a recessive autosomal mutation sc (for "scale") and produces "naked" chickens lacking scales on their legs and harboring only a few sparse feather follicles. Administration of skin cell extracts from scaleless chickens 29 days after infection with a hypervirulent strain (686) to naive chickens indicated that epithelial cells not associated with feather follicles are capable of transmitting infection and producing infectious viral particles [70]. In that study, however, no results were presented regarding the ability of these birds to transmit MD horizontally to susceptible chickens or to chickens of the same genotype, which would determine whether these animals excrete infectious virions into the environment.
The second is the Smyth (SL) line, which has colored feathers similar to those of the Brown Leghorn; this line spontaneously develops an autoimmune disease leading to depigmentation of regenerating feathers, which become white due to the death of melanocytes. This line is considered an animal model for human vitiligo [71]. The depigmentation of SL birds occurs between 6 and 14 weeks of age in 70% to 95% of the birds. When birds were moved from one university to another, the phenotype was only observed in 10% of the population, suggesting a role of the environment in addition to genetic factors. To determine the reasons for this difference, the environment and breeding conditions in the two animal facilities were compared. Among three important differences, vaccination of the birds against MD using the heterologous HVT appeared to be the most important factor. Indeed, it has been shown that 20-week-old birds vaccinated with HVT had an incidence of vitiligo 4 times higher than non-vaccinated birds [72]. This puzzling result raises various hypotheses regarding the impact of HVT vaccination on the development of vitiligo, knowing that the HVT vaccine also penetrates and replicates in feather follicles [51,59]. In the SL line, depigmentation is associated with melanocyte death and with the presence of anti-melanocyte auto-antibodies; therefore, one hypothesis is that HVT infects melanocytes or their precursors, leading to their death and triggering an auto-immune response against melanocyte markers. In other genetic backgrounds, infection of these cells may have no impact on feather color and remain unnoticed. However, the ability of chicken melanocytes or their precursors to become infected by MDV has never been reported to date.
Cutaneous lesions after MDV infection
Macroscopic and microscopic lesions have been observed on the skin of infected chickens at or near the feather follicles. Two types of lesions have been found: tumor-like and non-tumor-like lesions. It is noteworthy that it was the tumor-like cutaneous lesions (often incorrectly called cutaneous leucosis) observed in slaughterhouses that led to the initial suspicion that the skin was the main infected tissue in MD [73]. Upon microscopic examination, birds presented hypertrophied feather follicles with compact lymphoid aggregates in the dermis associated with capillaries. The presence of MDV in tumor-like lesions was subsequently confirmed by isolation of the virus in culture [74]. However, in situ, these cells do not generally harbor viral antigens detectable by immunofluorescence [75], and therefore appear to be latently infected tumor cells. Interestingly, cutaneous tumors with large accumulations of lymphoblasts expressing the viral oncoprotein Meq have been observed in the dermis of scaleless chickens, suggesting that the presence of feather follicles is not required for the development of skin tumors [70]. Among the non-tumor-like lesions are the nuclear inclusion bodies typically found during lytic herpesvirus infections. These nuclear inclusions are only found in the upper layers of the feather follicle epithelium, and never in the basal layer [42,45,75]. These lesions are associated with the presence of viral antigens. Analysis of the distribution of feather follicles positive for MDV antigens and of lymphoid cell aggregates shows that these two features are associated [75], suggesting that lymphoid cells could be the source of feather follicle infection, although this has not been demonstrated. Macroscopic and microscopic lesions associated with the presence of MDV antigens have been described in cutaneous structures other than feather follicles, including the comb, barbs, and leg skin that harbors scales without feathers [76].
For more details on these skin lesions, we refer the reader to two reviews [20,77].
Atypical morphogenesis of MDV in the skin
All herpesvirus infectious particles have a similar morphology, consisting of an icosahedral capsid containing the viral genome, surrounded by the tegument and envelope. The particle, whose size differences depend mostly on the tegument's thickness, is 200-250 nm in diameter for the type-species viruses HHV-1 and PRV (pseudorabies virus) [78]. The particles are the result of a complex assembly, also termed viral morphogenesis, which follows one of three models. The most common model is that of envelopment-deenvelopment [79][80][81][82][83]. Assembly starts in the nucleus, where the genome is incorporated into the capsids. These mature capsids, also called type C capsids, are then transported into the cytoplasm after budding at the inner nuclear membrane and fusion with the outer nuclear membrane. During this intermediate step, the envelopment-deenvelopment process creates a primary enveloped particle in the perinuclear space. After reaching the cytoplasm, the capsids bind tegument proteins and are re-enveloped by budding into a membrane-bound organelle, probably the trans-Golgi network. Mature enveloped particles are then released into the extracellular medium by exocytosis. For the type-species alphaherpesviruses HHV-1 and PRV, the number of mature viral particles in the cytoplasm and in the extracellular medium is generally high in various cell types. For MDV, mature viral particles are scarce in the cytoplasm (approximately 0.5% of total particles) [84] and have never been observed in the extracellular medium in cell culture. The same is true in the tissues of infected chickens, except the skin (see review [85]). The skin has been shown to contain many MDV infectious particles in the epidermis of feather follicles, in the transitional layer [42,86,87]. In these cells, particles are often found within cytoplasmic inclusions constituted of electron-dense amorphous material and lacking visible peripheral lipid membranes.
At higher magnification, enveloped particles located in these inclusions are 200-250 nm in diameter and do not seem to be surrounded by the second membrane predicted by the second envelopment model [42]. Because these two characteristics are atypical of alphaherpesviruses, they raise various hypotheses regarding the mechanism of final envelopment and excretion of the virus into the external medium. Are MDV virions excreted from the keratinocytes via active exocytosis, or do they remain trapped in these cells until their final differentiation into corneocytes and get excreted passively into the environment by the physiological process of desquamation? These questions remain to be answered. To date, there is no cell system that reproduces in culture the atypical viral morphogenesis observed in this tissue, which hinders its study as well as that of the associated cellular determinants and processes.
As mentioned above, in 1970 the teams of Calnek and Nazerian were able to isolate viral particles from the skin of infected chickens and to observe them using TEM [41,42]. To this end, tissues were homogenized in water by freeze-thawing or sonication. Under these conditions, more than 50% of the viral particles observed were enveloped and had a diameter of 273-400 nm [41]. The size of these viral particles seems abnormally large compared to their estimated size in situ in the epidermis; this may therefore be an artifact of the virus extraction method in hypotonic medium.
Viral molecules associated with MDV replication in feather follicles or with the infectiousness of particles excreted from the skin

Many studies have attempted to characterize the genes and/or viral proteins preferentially expressed in feather follicles in order to explain the high rate of morphogenesis observed in this tissue. A few viral proteins are expressed at a higher level in the feather follicle than in other cell types in vivo or in culture. For instance, glycoprotein gD (encoded by the US6 gene), which is not usually expressed in chick embryo fibroblasts (CEF) in culture [88], is expressed in 30% to 50% of feather follicles positive for other viral antigens such as pp38 in experimentally infected chickens [61]. The role of gD expression in feather follicles is still unclear because the US6 gene is not required for MDV transmission between birds [89]. The tegument protein VP13/14, encoded by the UL47 gene, is also strongly expressed in the epithelium of feather follicles of infected chickens, but only weakly in the spleen and in CEF in culture [90]. However, the relationship between its high level of expression in the feather follicle and the high viral productivity in that tissue has not been investigated.
The major tegument protein VP22, encoded by UL49, also influences MDV horizontal dissemination. Indeed, fluorescent tagging of VP22 at its C- or N-terminus abolished or diminished bird-to-bird transmission, respectively [14,90]. In the latter case, the MDV genome copy number in feathers was reduced compared to the wild type [14].
To date, no cellular component has been found to be associated with the higher viral replication in feather follicles, and specifically with the ability of this tissue to produce a large quantity of infectious viral particles. The development of new molecular models of keratinocytes permissive to MDV infection in our laboratory may help solve this problem [91].
Excretion of vaccinating strains and pathogenic strains of MDV
The three currently available vaccines (HVT, GaHV-3 SB-1, and GaHV-2 CVI988/Rispens) induce a non-sterilizing immune response which protects against tumor development. The Rispens strain is to date the best available vaccine against the most virulent strains of MDV. Because MDV is strictly cell-associated in culture, GaHV-2 vaccines are constituted of infected cells frozen in liquid nitrogen, a unique formulation for an antiviral vaccine. In poultry houses, vaccines are administered manually to 1-day-old chicks, or in ovo to the embryo 2-3 days before hatching using an automated injection system. All vaccine strains replicate in feather follicles, and their DNA is detectable in feather tips by qPCR [59,64]. The kinetics of detection of the genomes of these strains is similar to that of pathogenic strains. For instance, the genome of the Rispens strain is detectable 4 to 7 days post-vaccination in feather tips by qPCR [64,92], and the number of genome copies increases to reach 100-fold that measured in other tissues [64]. The number of copies of the Rispens genome at 21 days post-vaccination is highly variable between birds [93]. Whether quantitation of the genomes of vaccine strains in feathers by qPCR can be used to evaluate the level of protection of a flock in poultry farms remains to be determined [93,94].
It is well established that vaccination blocks neither the infection of feather follicles by pathogenic strains nor viral production, whether during experimental infection or in poultry farms [57,62,95]. In the past several years, qPCR methods have been developed to discriminate between vaccine strains and virulent strains. In particular, a point mutation in the pp38 gene allowed the attenuated Rispens strain to be distinguished from most pathogenic field strains [95]. The impact of vaccine viruses on the replication of pathogenic viruses, and vice versa, is starting to be elucidated. Several studies have shown an increase in HVT genome load in feathers after MDV infection, suggesting that infection by a virulent virus could enhance the replication of the vaccine virus [62,96]. This has not been observed with the homologous Rispens vaccine strain [92]. The accumulation of the pathogenic strain RB-1B in feathers is reduced approximately 10-fold after vaccination with the Rispens strain, but its kinetics are not shortened (within the 21 days of the study) [92]. Nair hypothesized that vaccination allows pathogenic MDV strains to spread unobtrusively in poultry houses and could contribute to the evolution of viruses towards more virulent genotypes [17].
Immune response in the skin of MDV-infected chickens
Studies of the host immune response in feathers after infection with a highly virulent virus (such as RB-1B) or with a vaccine virus (such as Rispens or HVT) show an increase in the expression of pro-inflammatory cytokine genes, particularly gamma-interferon, as well as an infiltration of CD4+ T lymphocytes with or without CD8+ T lymphocytes [97,98]. These results suggest that the immune response in feather follicles is relatively ineffective at blocking MDV replication in that tissue and at preventing its excretion into the environment. The cellular and molecular mechanisms that could protect against MDV replication in, and excretion from, the skin are currently poorly characterized. Greater knowledge of these mechanisms would substantially help reduce the spread of pathogenic strains in poultry houses.
Conclusions
In the past several years, the interactions between MDV and the skin have attracted renewed interest. Many studies have shown that pathogenic viruses are excreted from feather follicles into the environment at high levels despite vaccination. The development of new techniques to measure the viral load in feather tips and dust has been essential for obtaining these data. Blocking the excretion of pathogenic MDV is currently considered a major goal for stopping and preventing the evolution of MDV towards more pathogenic genotypes. Fundamentally, however, many questions remain unanswered, particularly regarding the molecular mechanisms and cellular components involved in the atypical morphogenesis of MDV in the epithelium of feather follicles, which leads to high production of infectious virions and environmental contamination.
Lexicon
Barbs: Thick appendages located on both sides of the beak.
Bursa of Fabricius: Primary lymphoid organ, specific to birds, in which B lymphocytes are generated and selected. B lymphocytes exit the bursa only from hatching onwards. This organ, located on the dorsal side of the cloaca, regresses after 12 weeks of age and eventually disappears completely.
Feather follicle: Region of the skin where a feather is formed and anchored (one follicle harbors one feather). The follicle ensures the renewal of the feather after a physiological or accidental loss.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MC wrote the part related to chicken skin and feather follicles and prepared the figures and table. CD wrote the parts related to Marek's disease virus. Both authors read and approved the manuscript.
"year": 2014,
"sha1": "b36d00fef2d8d2a614a835de25a3e97fe56fb91f",
"oa_license": "CCBY",
"oa_url": "https://veterinaryresearch.biomedcentral.com/track/pdf/10.1186/1297-9716-45-36",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72afaa8235e06aa24a01ce3e5875231e83c0795c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Brief Report: Above and Beyond Safety: Psychosocial and Biobehavioral Impact of Autism-Assistance Dogs on Autistic Children and their Families
Autism-Assistance Dogs (AADs) are highly skilled service animals trained primarily to ensure the safety of an autistic child by preventing elopement and mitigating 'meltdowns'. Although anecdotal accounts and case studies have indicated that AADs confer benefits above and beyond safety, empirical support anchored in validated clinical, behavioral, and physiological measures is lacking. To address this gap, we studied children and their families before and after receiving a well-trained AAD using a within-subject, repeated-measures design. Notably, this study is the first to assess change in a biomarker for chronic stress in both autistic children and their parents. Final analyses included pre-/post-AAD data from 11 triads (parent/handler-dog-child) and demonstrated significant, positive psychosocial and biobehavioral effects of AADs.
Introduction
Autism spectrum disorder (ASD), a heterogeneous neurodevelopmental disorder (NDD) comprising lifelong challenges in social, communication, and behavioral domains, has reached an unprecedented prevalence estimate of 1-in-54 in the United States (Maenner et al., 2020). Frequently, treatment plans not only need to address core ASD symptoms, but also a variety of co-occurring developmental, psychiatric, neurologic, or medical diagnoses that further impact daily functioning and quality of life (Masi et al., 2017). One approach with the potential to address a number of concerns for autistic individuals and their families is the incorporation of animal-assisted interventions (AAIs) into home, school, and hospital settings (Dimolareva & Dunn, 2020; Esposito et al., 2011; Johnson et al., 2002); several studies have reported positive effects when human-animal interactions (HAI) have been integrated into ASD therapies (Dimolareva & Dunn, 2020; Droboniku & Mychailyszyn, 2021; Funahashi et al., 2014; O'Haire et al., 2013). Anecdotal accounts have also accrued attesting to the benefits of well-trained autism-assistance dogs (AADs) who engage with their human partners on a daily basis. Yet, despite rising interest in the field, the evidence base for AAIs for ASD remains limited, due in large part to considerable variability in research methodologies, implementation, and reporting (Kazdin, 2017; O'Haire, 2013, 2017). Sparser still are systematic evaluations of whether and how well-trained AADs can impact the lives of autistic children and their families (Butterly et al., 2013).
The primary trained duties of an AAD stemmed from a critical need to prevent child elopement; a foremost concern for many parents of autistic children is that their child may bolt or wander and "expose him or herself to potential danger by leaving a supervised, safe space or the care of a responsible person" (Anderson et al., 2012). One study collected data on missing-person cases in the US involving elopement by individuals with ASD across a 5-year period (2011-2016) and reported that, of the 808 cases evaluated, 17% resulted in death, 13% required medical attention, 38% carried a heightened risk of bodily harm (i.e., "close calls"), and 1% were still considered missing (McIlwain & Fournier, 2017). Trained AAD teams increase a child's safety by working as a triad; in public, the child may wear a specially designed belt that connects to the dog's vest while an adult handler holds the dog's leash. AADs are taught to resist passively with their body weight if their child attempts to bolt, and the tethering system keeps the child with their dog. Caregiver and case-study reports have related that AADs can prevent elopement effectively while providing a sense of security for both parents and children (Burgoyne et al., 2014; Burrows et al., 2008). In fact, this trained ability to prevent a child with autism from wandering away confers 'service animal' status on AADs, defined by the US Department of Justice as a dog that is individually trained to do work or perform tasks directly related to a person's disability. Service dogs are permitted to accompany people with disabilities in all areas where members of the public are allowed to go (ADA, 2010).
Another troubling issue affecting families of autistic children is the health and well-being of parents/caregivers, who report experiencing higher physiological stress, parenting-related stress, and fatigue than parents of typically-developing (TD) children and children with other NDDs (Baker-Ericzén et al., 2016; Estes et al., 2013; Fecteau et al., 2017; Smith et al., 2009); these experiences may increase parental risk for mental (e.g., anxiety, depression) and physical health (e.g., adrenal, cardiovascular) problems (Foody et al., 2014; Seymour et al., 2012). Myriad factors, including child characteristics and behavioral challenges (Olson et al., 2021) as well as sociocultural and economic circumstances (e.g., access to resources, stigma associated with mental health, financial burden of care), can compound to distress parents and affect both child and overall family outcomes by means of transactional pathways (Bonis, 2016; Iadarola et al., 2019; Rodriguez et al., 2019).
Encouragingly, reports of collateral benefits have emerged from families with AADs trained chiefly for safety. One seminal study noted that the contribution of service dogs to family outcomes extended beyond physical welfare to behavioral and psychosocial domains; parents reported that they experienced improved quality of sleep and a greater sense of independence, while their children exhibited fewer negative behaviors (e.g., "meltdowns", "tantrums", "bolting") and families overall experienced an increase in social acknowledgement and a decrease in embarrassment or shame in public (Burrows et al., 2008). AADs have also been trained to disrupt potentially harmful repetitive or self-stimulating behaviors as well as provide a modified form of pressure-touch therapy practiced by occupational therapists to help autistic individuals reduce levels of arousal and anxiety (Bestbier & Williams, 2017; Grandin, 1992; Krauss, 1987). Further, because simple language is used to work with AADs, children may gain rewarding interactive experiences that then scaffold socialization with other humans (Solomon, 2010). Broadly, these dogs may serve as social catalysts for their human partners by enhancing social interactions, increasing social networks, and reducing instances of social discrimination (Becker et al., 2017; Camp, 2001; Carlisle, 2015; Mader et al., 1989; McNicholas & Collis, 2000).
Yet, while the positive, multidimensional impact of these AADs has often been reported in anecdotal accounts and case studies, empirical research substantiating these gains is limited. Moreover, documentation is sparse specifying how service dog providers collect outcome data when evaluating the success of their canine placements (Butterly et al., 2013). In order to strengthen the evidence base for this field, systematic pre-/post-AAD assessments employing validated instruments are warranted. Also, although a handful of studies have been published on the effects of assistance dogs on human psychosocial health and well-being (see Rodriguez et al., 2020 for review), few have focused on dogs trained expressly for ASD. Whereas most adult handler-dog teams (e.g., mobility, seeing, hearing, diabetes) are dyadic, to evaluate the benefits of AADs we must consider the unique dynamics of the handler-dog-child triad in conjunction with the vast heterogeneity of ASD diagnoses, which are often comorbid with other NDDs. Finally, the use of biological measures, when possible, may provide key objective insights into the long-term effects of having an AAD. To date, however, few studies have included a biomarker measure in their evaluations of AAD success. A review of the extant literature revealed only two such investigations that measured changes in cortisol (salivary), the primary glucocorticoid produced by activation of the hypothalamic-pituitary-adrenal (HPA) axis in response to a stressor. Specifically, both studies examined the cortisol awakening response (CAR), a core biomarker of HPA axis regulation related to psychosocial stress and stress-related psychiatric disorders (Fries et al., 2009), and reported decreases in CAR for both parents and children after they received trained service dogs (Fecteau et al., 2017; Viau et al., 2010).
The overarching objective of the present study has been to investigate empirically the impact of AADs by collecting psychosocial and biobehavioral data by means of validated instruments designed to better understand the functioning of children and families affected by ASD. In addition to assessment data collected via parent-report (child) and self-report (parent), we included a biological measure of chronic stress in both parents and children to augment our understanding of how AADs may affect physiological health. Chronic cortisol concentrations (CCC) assayed from a single sample of a keratinized matrix (e.g., hair/nails) have been shown to represent an accumulation of cortisol secretions over a time frame of months (Meyer & Novak, 2012; Phillips et al., 2021). In contrast, cortisol samples collected from saliva or urine are limited in time (less than a few days) and can require repeated measurements across 24 hours over several days to obtain average chronic concentrations (Wosu et al., 2013). Comparative studies examining the correspondence of CCC obtained from scalp-near hair segments to 30-day (3 × daily) average salivary cortisol area-under-the-curve levels demonstrated strong associations between CCC and prior 30-day integrated cortisol production measures (Fries et al., 2009; Short et al., 2016). Thus, the use of CCC can also reduce the burden of data collection for participants, particularly vulnerable populations, in addition to providing a retrospective gauge of chronic stress.
To our knowledge, the present study is among the first to assess CCC in autistic children and the only investigation to examine CCC in both parents and children with ASD. Additionally, no previous reports of the effects of AAD have incorporated CCC measures. Critically, we collected data both before and after participants received their dogs so that we would be able to evaluate outcomes within-subjects. Our study objectives were thus to contribute both quantitative and qualitative data from well-validated instruments to address the question of whether children and their families benefit from these human-canine partnerships across multiple domains.
Method
All study procedures were approved by the University of Minnesota's Institutional Review Board and all parents completed informed consent procedures. Participants were informed that their decision to participate would have no bearing on their current or future relationships with the university or the canine training program.
Participants
Using non-probability, purposive sampling, we recruited families from the top of the 3-5-year waiting list of applications to receive an AAD maintained by a regional assistance dog training program (Can Do Canines, New Hope, MN, USA).
Can Do Canines
Can Do Canines (https://candocanines.org/) is an internationally recognized, Assistance Dogs International (ADI) accredited, nonprofit organization that trains assistance dogs for hearing loss, mobility challenges, seizure disorders, Type 1 diabetes, and ASD in children. Families are provided with the dogs free of charge, and the economic burden and time investment for each certified handler/dog team, combined with the assiduous training and placement standards enforced by the organization, severely limits the number of dogs placed each year. Clients of the assistance dog provider receive AADs whose temperaments/talents were carefully matched to families by highly experienced trainers. Trainers are able to select for certain characteristics (e.g., hypoallergenic breeds) and tailor final training to meet the needs of individual families. To apply for an AAD, children (ages 2-7 years when applying) must have a confirmed ASD diagnosis and live within the state, and families must be physically and financially able to take full responsibility for the dog after certification (see Fig. 1 for Study Flow Diagram). An age restriction was established to accommodate the lengthy waitlist and the fact that size must be considered if dogs are to be trained to prevent child elopement. By the time they are ready for final training, potential AADs may have already had more than 18 months of socialization, general training, assessments, and intensive training specific to their assistance dog careers. Once the match is made, one caregiver undergoes training to become the primary dog handler and works with trainers and the AAD without their child present. When they are ready to have the dog move into the home, trainers then work with the triad (handler-dog-child) together to build their partnerships and skills in everyday life. These AAD teams require approximately 8-12 weeks to complete team training and certification.
Participant Characteristics
Since our potential participant pool was limited to the families who would be receiving an AAD during our period of data collection, our only criteria for inclusion beyond those of the training program were that parents/caregivers be able to provide informed consent and complete questionnaires in English. In total, we enrolled 13 families to participate in the study. Final analyses included data from 11 teams; we were unable to collect post-AAD data from one family, and one team experienced a change in family circumstances and had to return their dog. Mean AAD age was 2.9 ± 0.5 years when matched with a family, 45.5% were female, and mean weight was 62.2 ± 7.1 pounds. With the exception of one Standard Poodle, all AADs were Labrador Retrievers, Golden Retrievers, or Labrador/Golden crosses. The designated adult dog handler was the primary parent participant; 100% were mothers, and 27% of families identified as single-parent households. Secondary parent/caregiver data were collected when possible but were insufficiently powered for further analysis. Formal diagnosis of ASD was confirmed through parent-provided records by the assistance dog organization, while additional medical history, including diagnoses of co-occurring neurodevelopmental conditions, was collected via parent-report. All children had a confirmed diagnosis of ASD and 45.5% were non-verbal. Detailed parent and child characteristics are reported in Table 1.
Given that we would not be able to control for heterogeneity in family characteristics and child medical history and treatment, we implemented a repeated-measures design that would allow us to examine changes over time within each family. We did, however, ask parents to report ongoing treatments at each assessment, and no significant changes in ASD-related treatments between pre-/post-AAD measures were recorded; 36.3% were receiving therapy (e.g., occupational, speech and language, physical, applied behavioral analysis), 45.5% were receiving therapy and medications, and 18.1% were receiving therapy and "other" treatments (e.g., assistive technology, adaptive sports). We should also note that one common factor amongst the families who chose to remain on the 3-5-year-long waitlist for an AAD is a willingness and commitment to bringing an AAD into their lives and the belief that an AAD might be beneficial. Further, families would not likely apply for an assistance dog if their child had known sensory aversions to canines (Grandin et al., 2010) that would preclude meaningful interaction. Applicants were able to make special requests for hypoallergenic breeds, but those limitations could lengthen wait times substantially.
Study Design
Our assessment battery consisted of parent-report (child) and self-report (parent) questionnaires as well as CCC sample (parent and child) collection. We asked participants to complete pre-AAD (T1) measures after being taken off the waitlist and before receiving their dogs. A follow-up assessment (post-AAD; T2) was administered 8-12 weeks after teams were certified. Participants were given the option of completing measures remotely or in person. Paper questionnaires and consent forms were converted to REDCap (Research Electronic Data Capture), a secure, web-based software platform designed to support data capture for research studies (Harris et al., 2009, 2019). Participants were also given the option to have the researcher collect hair/nail samples in person or to self-collect at home and submit samples to our laboratory by mail. Two post-intervention time points were included in the original study design. However, due to institutional research and canine training facility restrictions during the COVID-19 pandemic, we were unable to complete all planned data collection. Moreover, we were concerned that results might be confounded by the considerable stress and changes in routine brought on by the pandemic alongside concurrent civil unrest in our regional community. Consequently, we limited our final data set to teams who completed both of their pre- and post-AAD assessments either before (N = 7) or after Spring 2020 (N = 4). In other words, although we continued to collect follow-up data remotely when possible, we decided to only include data in our final analysis if families completed T2 before March 2020 or if they enrolled after Spring 2020. Ultimately, because the training facility was also required to shut down for a period of time, we did not enroll the next new participant family until October 2020.
While participants had been given the option to complete procedures remotely/online before pandemic restrictions were put in place, the latter group of participants were offered the remote/online option only. We report herein on data collected from families before receiving their AAD and 8-12 weeks following team certification.
Behavioral/Psychosocial Measures
Behavioral features of children were assessed by having parents complete a pre-/post-AAD battery of questionnaires (see Table 2 for descriptions) including the Social Responsiveness Scale-2nd Edition (SRS-2) (Constantino & Gruber, 2012), the Child Behavior Checklist (CBCL) (Achenbach & Rescorla, 2001), and the Autism Spectrum Quotient-Child (AQ-Child) (Baron-Cohen et al., 2001). In order to gather information about parent/family experiences and concerns, parents also completed the Autism Parenting Stress Index (APSI) (Silva & Schalock, 2012), the State-Trait Anxiety Inventory (STAI) (Spielberger, 1989), the Autism Family Experience Questionnaire (AFEQ) (Leadbitter et al., 2018), and the Perceived Stress Scale (PSS) (Cohen et al., 1983). At the second time point, we also asked parents for canine signalment and to respond briefly to some open-ended questions about the AAD's integration into their household.
Biological Measures
To explore AAD impact using a biological measure of chronic stress, we collected samples of scalp hair (posterior vertex) or nail clippings from parents and children for cortisol extraction and analysis by enzyme immunoassay (Cooper et al., 2012; Meyer & Novak, 2012). Although we originally planned to measure only hair cortisol concentration (HCC), hair collection from some of our initial participants proved prohibitively difficult and/or not possible due to lack of scalp hair. Subsequently, participants were also given the option to submit fingernail clippings (Phillips et al., 2021) as an alternative method (Liu & Doan, 2019). Participants provided the same sample type (hair or nails) for their pre- and post-AAD measures. Parents were also asked to complete a questionnaire for each hair or nail sample to capture data on hair care and medication use that may affect cortisol assay results (Doan et al., 2018; Hamel et al., 2011). Ultimately, we had to limit our final analysis to the subset of participants from whom we received both pre-/post-AAD samples (parents, N = 6; children, N = 5); inclusion/collection of complete datasets was hindered by difficulty with collection, low sample weight, and the presence of steroid medications that may have inflated final concentrations.
Data Analysis
Using SPSS 25.0 (Statistical Package for Social Sciences, Version 25) we conducted Shapiro-Wilk tests to assess data for normality and Wilcoxon signed-rank tests to assess pre-/ post-AAD changes. Both full scale and subscale scores were included when applicable. We used raw scores rather than t-scores for the CBCL and SRS-2 because, at the high end of the distribution, raw scores may be more precise than t-scores (Achenbach & Rescorla, 2001;Constantino & Gruber, 2012). Significance levels were set at alpha = 0.05 (two-tailed). We also examined associations between parent and child data on change in stress and cortisol levels using Pearson correlations.
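The pipeline described above (a Shapiro-Wilk normality screen followed by a two-tailed Wilcoxon signed-rank test on paired pre-/post-AAD scores) can be sketched in Python with SciPy, paralleling the authors' SPSS workflow; the function name and all scores below are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats

def pre_post_analysis(pre, post, alpha=0.05):
    """Normality screen on paired differences, then a two-tailed
    Wilcoxon signed-rank test comparing pre vs. post scores."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    diff = post - pre
    _, shapiro_p = stats.shapiro(diff)         # Shapiro-Wilk normality test
    _, wilcoxon_p = stats.wilcoxon(pre, post)  # paired, non-parametric
    return {
        "normal": bool(shapiro_p > alpha),
        "wilcoxon_p": float(wilcoxon_p),
        "significant": bool(wilcoxon_p < alpha),
    }

# Illustrative placeholder ratings for 11 participants (NOT study data):
t1 = [28, 25, 30, 27, 24, 29, 31, 26, 28, 27, 30]
t2 = [22, 21, 27, 22, 20, 26, 26, 23, 25, 22, 24]
result = pre_post_analysis(t1, t2)
print(result["significant"], round(result["wilcoxon_p"], 4))
```

Because the Wilcoxon test ranks within-pair differences rather than assuming normality, it suits small paired samples such as the 11 triads analyzed here.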
Chronic Cortisol Concentration
We collected 20-50 mg of scalp hair from the posterior vertex region and stored samples at room temperature in dry and dark conditions (Cooper et al., 2012); hair was then wetted with isopropanol, minced into 2 mm pieces, and washed four times with 0.5 mL of isopropanol at room temperature for 30 s to remove external contamination. For fingernail samples, clippings were collected from all ten fingers and then stored and processed using an analogous protocol. Samples were dried under a nitrogen stream and weighed. Cortisol was extracted with 1 mL of methanol overnight at 55 °C, 1 mL of acetone for 5 min, and then 1 mL of methanol overnight at 55 °C once more (Slominski et al., 2015). Pooled solvent fractions were removed under a nitrogen stream. 1 mL of acetone was added and evaporated under a nitrogen stream to remove residual solvent. Samples were then dissolved in an assay diluent, randomly distributed on different plates to avoid a batch effect, and analyzed in duplicate using a Salimetrics cortisol enzyme-linked immunosorbent assay (ELISA) (Miller et al., 2013). If readings for a sample differed by more than 10%, or if readings were too high due to high concentration, the measurements were repeated; also, 5% of samples were randomly reanalyzed to ensure reproducibility.
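The duplicate-reading rule above (repeat the assay when a sample's duplicate readings differ by more than 10%) reduces to a percent-difference check against the pair mean; in the following minimal sketch, the function name and example readings are invented, and only the 10% threshold comes from the text.

```python
def needs_repeat(reading_a, reading_b, tolerance=0.10):
    """Return True when duplicate assay readings differ by more than
    `tolerance` (10% by default) relative to their mean."""
    mean = (reading_a + reading_b) / 2.0
    if mean == 0:
        return True  # degenerate pair; re-run rather than divide by zero
    return abs(reading_a - reading_b) / mean > tolerance

# Made-up duplicate ELISA readings (pg/mg), not study data.
print(needs_repeat(6.8, 6.9))  # duplicates ~1.5% apart
print(needs_repeat(6.8, 8.2))  # duplicates ~18.7% apart
```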
Results
Using within-subjects contrasts, we compared measures collected before families received their AAD (T1) and after they had time to complete training and integrate the AAD into their daily lives (T2). Overall, we found significant, positive changes over time for parent, child, and family measures. Complete results are reported in Table 3 and Figs. 2, 3, 4. Given the small size of our sample, we employed Shapiro-Wilk tests to assess normality and found that, overall, our data were not normally distributed. Hence, we opted to use non-parametric tests to compare pre- and post-AAD measures. Specifically, Wilcoxon signed-rank tests revealed reductions in levels of experienced and perceived stress on the: PSS, Z = − 2.361, p = 0.018; APSI, Z = − 2.255, p = 0.024; STAI (State), Z = − 2.045, p = 0.041; STAI (Trait), Z = − 2.398, p = 0.016; and the AFEQ (Total Score), Z = − 2.936, p = 0.003. We also found significant improvements in parent-reports of child behavior and ASD symptomatology: AQ-Child, Z = − 2.503, p = 0.012; CBCL (Total Problems), Z = − 2.603, p = 0.009; SRS-2 (Total), Z = − 2.003, p = 0.045.
We also analyzed CCC levels for both parents and children in the subset of participants who provided both pre- and post-AAD hair/nail samples using Wilcoxon signed-rank tests and found that CCC levels were lower at T2 than at T1 in parents, F(1,5) = 20.852, p = 0.006, and children, F(1,4) = 30.600, p = 0.005. Inter-plate variability was 2.2%, and the high, median, and low values for final cortisol concentration (%RSD) were 20.35 pg/mg (5.87), 6.85 pg/mg (1.77), and 2.54 pg/mg (0.265), respectively. For the parent-child dyads with complete cortisol data, we detected a correlation in concentration change (T1-T2), significant at the 0.05 level (1-tailed), r = 0.822, p = 0.044, indicating a reduction in cortisol levels for both parents and children. We also found a significant correlation (1-tailed) between T1-T2 parental cortisol levels and T1-T2 parental PSS scores, r = 0.814, p = 0.047, indicating that reductions in chronic cortisol levels corresponded with reductions in parent perceived stress. Also, T1-T2 child cortisol levels were even slightly more strongly correlated (1-tailed) with T1-T2 parental PSS scores, r = 0.852, p = 0.034.
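The change-score correlations reported above can be reproduced in form (not in value) with SciPy's `pearsonr`; every number below is an invented placeholder for five parent-child dyads, and the one-tailed conversion assumes a positive directional hypothesis, matching the 1-tailed tests reported.

```python
import numpy as np
from scipy import stats

# Invented T1/T2 chronic cortisol values (pg/mg) for 5 parent-child
# dyads (placeholders only, not the study's data).
parent_t1 = np.array([12.0, 9.5, 15.2, 8.8, 11.1])
parent_t2 = np.array([8.1, 7.0, 10.5, 6.9, 8.0])
child_t1 = np.array([10.2, 7.8, 13.9, 7.5, 9.6])
child_t2 = np.array([7.4, 6.1, 9.0, 6.2, 7.3])

# Correlate pre-to-post change scores (T1 - T2) between dyad members.
parent_change = parent_t1 - parent_t2
child_change = child_t1 - child_t2
r, p_two_tailed = stats.pearsonr(parent_change, child_change)

# One-tailed p under a positive directional hypothesis.
p_one_tailed = p_two_tailed / 2 if r > 0 else 1 - p_two_tailed / 2
print(round(float(r), 2), p_one_tailed < 0.05)
```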
Finally, we asked parents to describe briefly their child's relationship with their AAD. While we did not collect enough text to conduct thematic analysis, comments were notably positive and highlighted individual differences in each team. Some examples of parental observations are included below.
Discussion
Our primary study objective was to assess the multidimensional impact of well-trained AADs on autistic children and their families across key domains of function. By recruiting from the top of a wait-list for AADs, we were able to enroll participants shortly before they received their dog, thus allowing us to collect pre-/post-AAD data using a battery of psychosocial and biobehavioral assessments. Our findings provide substantive support for the positive effects of AADs above and beyond their duties as a child's "sentinel of safety" (Burrows et al., 2008). The observed benefits of AADs may not be surprising since young children, as early as 9 months of age, have demonstrated an attraction to animals, often preferring them to inanimate objects (DeLoache et al., 2011; Kahn, 1997; Lobue et al., 2013; Ricard & Allard, 1993). Positive interspecies interactions have also been associated with increased concentrations of oxytocin and decreased cortisol levels in both humans and canines (Handlin et al., 2015; Nagasawa et al., 2015; Odendaal & Meintjes, 2003). During medical procedures, the presence of a companion animal has been shown to reduce a child's physiological arousal and behavioral distress (Nagengast et al., 1997; Vagnoli et al., 2015). Correspondingly, during a laboratory-based stressor, the rise in perceived stress for TD children (7-12 years) was buffered significantly by the presence of the family pet dog, relative to children who were alone or with a parent (Kertes et al., 2017). As invaluable sources of socio-emotional support (Melson, 2003), animals may also serve as transitional objects, through which children can transfer their established bonds to humans (Martin & Farnum, 2002). For children especially, dogs provide multisensory experiences and direct feedback in the context of nonverbal actions (Prothmann et al., 2009; Redefer & Goodman, 1989). Prior research has suggested that dogs are particularly adroit at eliciting prosocial behavior, acting as social catalysts with humans, as well as reducing physiological arousal and stress in children and adults (Fecteau et al., 2017; McNicholas & Collis, 2000; Viau et al., 2010).
[Table 3 notes: a Wilcoxon signed-rank test, based on positive ranks; b missing data in responses from one pre-AAD APSI; **p ≤ 0.01, *p ≤ 0.05, †p ≤ 0.10; columns report Pre-AAD (T1) and Post-AAD (T2) values.]
Consistent with these findings, our data show significant pre-/post-AAD improvements for children on the AQ-Child, the CBCL (CBCL Total Problems; Anxious/Depressed, Social Problems, and Attention Problems Subscales; Internalizing and Externalizing Problem Composites), and the SRS-2 (SRS Total; Social Cognition, Social Communication, and Social Motivation Subscales). Parents self-reported significantly reduced stress and anxiety on the APSI, PSS, and STAI (State and Trait) and significantly improved family experiences overall on the AFEQ (AFEQ Total; Child Development, Understanding, & Social Relationships; Child Symptoms-Feelings & Behavior; Family Life Subscales). Both parents and children with pre-/post-AAD CCC data showed a reduction on our objective physiological measure of chronic stress. However, while the majority of outcome measures indicated significant pre-/post-AAD improvements, it is worthwhile to consider those areas that yielded trend improvements on the AFEQ (Experience of Being a Parent of a Child with Autism Subscale, p = 0.102) and the CBCL (Somatic Complaints Subscale, p = 0.107) and those measures that returned non-significant results on CBCL Subscales (Withdrawn/Depressed, Rule-Breaking Behavior, Thought Problems) and SRS-2 Subscales (Social Awareness, RRBs).
[Fig. 2 caption: Pre-/Post-AAD mean score differences on parent self-report measures demonstrating: A improved family experiences on the AFEQ; B reduction of parenting stress on the APSI; C reduction of perceived stress on the PSS; and D reduction of anxiety on the STAI (*p ≤ 0.05; **p ≤ 0.01).]
By differentiating between domains that are more or less susceptible to the presence of an AAD, we may be afforded insight into the potential mechanisms of actions subserving the dynamic, ongoing relationships within parent/handler-dog-child triads.
In evaluating how the integration of a well-trained AAD can result in long-term changes in the lives of autistic children and their families, adopting a dynamic biopsychosocial perspective may be useful to contextualize the role of AADs (Gee et al., 2021; Lehman et al., 2017). Within this framework, the AAD's role in preventing a child's elopement may be construed as a continuous interplay between biological, psychological, and social factors within a non-static environment. For example, by consistently and effectively preventing a child from eloping, the AAD helps alleviate some of the acute safety concerns reported by parents/caregivers of autistic children (Bonis, 2016; Burrows et al., 2008; Rodriguez et al., 2019). Over time, the increased sense of security and social acknowledgment afforded by the AAD may reduce chronic physiological and psychological stress in parents, improving overall quality of life for the family (Eddy et al., 1988; Mader et al., 1989). Further, parents have reported that having the AAD to support their child enables them to go on family outings, feel more independent, and be more connected socially, processes that can also serve to augment mental health and well-being more broadly (Burgoyne et al., 2014; Smyth & Slevin, 2010).

[Figure caption: Pre-/post-AAD mean score differences on parent-report measures demonstrating improvements (decrease in problem scores or reduction in challenges) on the (A, B) CBCL, (C) SRS-2, and (D) AQ-Child (*p ≤ 0.05; **p ≤ 0.01).]
Limitations
While findings from this investigation provide significant support for the benefits of AADs, the data are limited in a number of ways.
First, we must note that the conclusions drawn from these AAD teams should be considered in view of their highly specialized training and stringent certification criteria, and may not generalize to animals described as emotional support, therapy, comfort, or companion animals, which have not received comparable levels of training.
Second, we did not include a wait-list control group (families who applied for an AAD but did not receive a dog during the same period of time) or a non-wait-list control group (families from the community who had not applied for an AAD). Given the highly multifactorial nature of each family's individual characteristics, the unpredictable length of time each family might be on the wait-list, and the limited number of AADs available, we decided to constrain the study to a single-group, repeated-measures design. The additional variability introduced by families who had not applied for an AAD (non-wait-list controls) would render comparison data even more difficult to interpret. Additionally, including a control group from further down the wait-list would require participant families to remain on the wait-list for the duration of the study collection period, and we did not wish to interfere with standard operating procedures of the training program. In particular, we did not want study participation to be a factor if an AAD candidate proved to be a good match for a control family, as collecting an appropriately-timed T2 assessment would delay the process of getting the AAD team started. Moreover, families unlikely to receive a dog during our collection period (i.e., bottom of the 3-5 year wait-list) would include a younger cohort of autistic children who would be poorly matched to the active group if we implemented a cross-sectional design. Further, families on the wait-list could not be restricted from introducing, discontinuing, or modifying therapies/medications during the study period; yet, these alterations would inexorably confound comparisons to the active group.
While families who did receive an AAD were likewise not constrained from altering their treatment plans, the process of becoming an AAD team is quite involved, and we surmised that families would not have time to modify their existing treatment plans substantially. We did not note any significant alterations in child therapies/medications pre-/post-AAD in our sample, but we could have factored such changes into our final analysis as needed.
Third, due to the high demand and low supply of qualified AADs, our sample size was expectedly quite small. In anticipation of this limitation, we chose a within-subject design to examine pre-/post-AAD changes for each family; we were able to demonstrate significant, quantifiable changes from T1 to T2. Also, because we were unable to collect all data from the third time point as originally planned, we were precluded from gauging whether improvements were maintained long-term. Additional data points could have provided insight into whether continued interaction with AADs would lead to sustained and/or greater/fewer changes over time. For example, while we posit that some AAD effects follow a more protracted time course through indirect pathways, these may not be evident until more time has passed. One putative mechanism entails the proximal reduction of physiological arousal and stress and an increase in feelings of physical safety with the AAD, which may impact sleep quality distally in time. Several parent participants reported that their children have difficulty sleeping which, in turn, affected their own sleep quality. Sleep deprivation indubitably plays a role in mental health and well-being, which can then impact multiple levels of family systems and behavior (Mihaila & Hartley, 2018). However, several families noted that with the AAD's presence, their children began sleeping through the night, perhaps due to an increased sense of security or their canine's de-arousing capabilities.
Next, we could not control for the myriad variables that may have contributed to changes over the study timeline. For example, we cannot rule out the impact of developmental change over the study months, and families maintained their ongoing treatment and medication schedules while participating. Although our T2 data demonstrated significant improvements in participants relative to their T1 data, we cannot be certain that changes were not due to variables such as maturation, concurrent treatments, or unknown environmental factors. Also, most measures were parent-report and parent self-report assessments, which may be subject to response bias. Yet, given that the autistic children in our study were quite young, heterogeneous in their presentation, and all had co-occurring challenges (e.g., non-verbal, comorbid NDDs), it was not feasible to administer an objective task-based or observational measure that would be developmentally appropriate for all participants. We were, however, cautiously selective in the measures chosen for inclusion in our assessment battery; all instruments were validated, reliable (see Table 2), behavioral and psychosocial instruments that have been developed for and used widely to evaluate child functioning and development.
Finally, we experienced difficulty when collecting samples for cortisol assay because several participants had very short or no scalp hair. Similar issues when collecting fingernail samples arose because some individuals bit their fingernails or kept their nails quite short. An alternative option that may offset some of these issues in future studies might be the use of toenail clippings to ascertain cortisol concentration. Overall, considering the heterogeneity of our participant families, we are reasonably confident that receiving an AAD, the one consistent change for all families during the study collection period, was a driving factor in positive outcomes. Nevertheless, findings based on our limited sample size must be interpreted with caution.
Conclusions
To our knowledge, the present study is the first to examine psychosocial and biobehavioral effects of assistance dogs trained specifically for ASD using validated and standardized measures of family experience, parental stress, autism symptom severity, and child behavior; these data are also the first to evaluate a biological marker for chronic stress in both children and parents/caregivers. Our findings significantly augment the evidence base for the benefits of AADs on autistic children and their families across multiple domains.
At present, well-trained assistance dogs, particularly those for ASD, remain a highly limited 'commodity', requiring considerable, often prohibitively high, investment of resources by families and service dog providers. Average wait times for well-trained AADs can exceed 3 years and the estimated total cost to raise and train just one dog can surpass $55,000 (Cooper, 2021;Ensminger, 2010;Konrad, 2009). Further, after team certification, families must assume all financial responsibilities for canine care.
Currently, no health insurance policies cover any of these expenses beyond the possible application of pre-tax healthcare accounts (Internal Revenue Service, 2020), and most service dog providers require that families contribute at least part of the costs themselves. Additionally, while US federal law mandates access for service animals to all public areas, including schools, the Americans with Disabilities Act also requires that the animal be under the handler's control at all times (ADA, 2010). However, because public facilities are not themselves responsible for service animals, schools do not have to provide handlers. AADs are trained to work as part of a triad, and unless an adult dog-handler is available, the child is still prohibited from bringing their AAD to school. Given that these considerable financial and regulatory barriers remain, further work is needed to broaden the scope of our research to more service dog providers and autistic individuals both within the US and internationally. An enhanced understanding of factors contributing to the effectiveness of AADs will serve to refine canine placement procedures and training approaches, with the ultimate goal of increasing availability and accessibility of AADs for families who may benefit substantially from these specialized human-canine partnerships.
Acknowledgments This research was supported by a Grant from the Frank J. and Eleanor A. Maslowski Charitable Trust. We are most grateful to members of the Project to Assess Assistance Dogs (PAAD) Study (https://paad.umn.edu/) and to Can Do Canines in New Hope, MN (https://candocanines.org/) for their partnership and support throughout this project. Special acknowledgment to Alan Peters (Can Do Canines' Founder) for his leadership and vision, and to Can Do Canines' extraordinary team of staff and volunteers for their dedication and commitment to enhancing quality of life for people with disabilities. We are humbled by the strength and resilience of our participant families and their hard-working canine partners, without whom this work would not have been possible. We thank our collaborator, Dr. Kestutis Bendinskas, and his research group in the Department of Chemistry at the State University of New York at Oswego for performing all cortisol extraction and measurement procedures.
Author Contribution AT conceived of the study and performed all aspects of study design, data collection, data analysis, and manuscript preparation.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Management of belantamab mafodotin-associated corneal events in patients with relapsed or refractory multiple myeloma (RRMM)
Belantamab mafodotin (belamaf) demonstrated deep and durable responses in patients with heavily pretreated relapsed or refractory multiple myeloma (RRMM) in DREAMM-2 (NCT03525678). Corneal events, specifically keratopathy (including superficial punctate keratopathy and/or microcyst-like epithelial changes (MECs), eye examination findings with/without symptoms), were common, consistent with reports from other antibody–drug conjugates. Given the novel nature of corneal events in RRMM management, guidelines are required for their prompt identification and appropriate management. Eye examination findings from DREAMM-2 and insights from hematology/oncology investigators and ophthalmologists, including corneal specialists, were collated and used to develop corneal event management guidelines. The following recommendations were formulated: close collaboration among hematologist/oncologists and eye care professionals is needed, in part, to provide optimal care in relation to the belamaf benefit–risk profile. Patients receiving belamaf should undergo eye examinations before and during every treatment cycle and promptly upon worsening of symptoms. Severity of corneal events should be determined based on corneal examination findings and changes in best-corrected visual acuity. Treatment decisions, including dose modifications, should be based on the most severe finding present. These guidelines are recommended for the assessment and management of belamaf-associated ocular events to help mitigate ocular risk and enable patients to continue to experience a clinical benefit with belamaf.
Introduction
In recent decades, significant advancements have been made in the management of multiple myeloma (MM), with several new treatments approved and novel classes of agents being investigated 1,2 . However, MM remains incurable and new and effective therapies are needed 2,3 . At present, patients with MM are treated with three major drug classes: immunomodulatory agents, proteasome inhibitors (PIs), and anti-CD38 monoclonal antibodies (mAbs) 2 . Treatment responses and survival outcomes diminish with subsequent relapses and the prognosis is poor for patients with relapsed or refractory MM (RRMM), particularly patients who become refractory to anti-CD38 mAbs (median overall survival (OS): 6-9 months) 1,2,4 .
B-cell maturation antigen (BCMA) is a receptor specifically expressed on the cell surface of late-stage B cells and plasma cells 3 . BCMA activation induces B-cell proliferation, differentiation, and survival 3 . Considering the selective expression of the BCMA receptor and its impact on latestage B cells, BCMA represents an ideal therapeutic target for plasma cell malignancies 3 . BCMA-targeted therapies under clinical development include antibody-drug conjugates (ADCs), bispecific T-cell engagers, and chimeric antigen receptor T-cell therapies 3 .
Belantamab mafodotin (BLENREP; GSK2857916 (belamaf)) is a first-in-class ADC consisting of an anti-BCMA mAb conjugated to the microtubule inhibitor monomethyl auristatin F (MMAF) 5 . Belamaf eliminates MM cells by a multimodal mechanism of action, including apoptosis, and antibody-dependent cell-mediated antimyeloma responses, accompanied by release of markers characteristic of immunogenic cell death 5,6 . In the pivotal, Phase II, DREAMM-2 study (NCT03525678), patients refractory to immunomodulatory agents and PIs, and refractory and/or intolerant to anti-CD38 mAbs, received single-agent belamaf at 2.5 or 3.4 mg/kg 7 . As of the 13-month follow-up, the median duration of response (DoR; 11.0 months) and OS (13.7 months) estimates for the patients receiving the 2.5-mg/kg dose compared favorably with previously reported outcomes in patients with prior exposure to anti-CD38 therapies treated with selinexor plus dexamethasone (median DoR: 4.4 months; median OS: 8.6 months) 8,9 .
Single-agent belamaf (2.5 mg/kg) had a manageable safety profile, with keratopathy (microcyst-like epithelial changes (MECs), changes in the corneal epithelium observed on eye examination with or without symptoms; 72%), thrombocytopenia (38%), and infusion-related reactions (21%) as commonly reported adverse events (AEs) 7,9 . The cornea is the transparent, anteriormost structure of the eye and plays an important role in focusing light onto the retina (Fig. 1) 10 . In DREAMM-2, keratopathy (MECs) was typically described as superficial, bilateral, microcyst-like lesions seen on slit lamp microscopy 11 . In some patients, MECs were first observed in the corneal periphery and progressed to the mid-periphery and subsequently the center 11 . The presence of keratopathy (MECs) in the corneal center tended to correlate with changes in vision, including subjective blurred vision 11 . Similar findings have been commonly described with other ADCs, particularly for MMAF-containing ADCs [11][12][13] . Keratopathy (MECs) observed with belamaf and other ADCs appears clinically distinct from other pathologies that are commonly encountered by corneal specialists 11 .
Keratopathy (MECs) observed on eye examination was frequent (72%; 68/95 patients) and occurred early in treatment (median time to onset: 37 days) 11 . These events often led to dose modifications (dose delays: 47%; dose reductions: 25%); however, only 1/95 (1%) patients receiving the 2.5-mg/kg dose discontinued treatment due to keratopathy (MECs), indicating that patients were able to remain on treatment while these events were monitored 11 . Clinical responses were maintained in over 80% of patients with prolonged dose delays (>63 days, equivalent to more than 3 cycles), suggesting that responses to belamaf are durable despite dose modifications 14 .
Based on the experience in DREAMM-2 and studies of other MMAF-containing ADCs, it is anticipated that patients will recover from corneal events 11 . In DREAMM-2, most patients (77%; 46/60) with a Grade ≥2 keratopathy (MEC) event recovered from their first event, with a median time to resolution of 86.5 days 11 . The majority of patients with Grade 3/4 events (84%; 37/44) either recovered or were recovering as of the last follow-up 14 . In patients with unrecovered Grade ≥2 events at last followup, 45% (14/31) are still receiving treatment or in followup, so monitoring for recovery is ongoing 14 . The remaining patients with unrecovered events (55%; 17/31) are no longer in follow-up due to death, study withdrawal, or loss to follow-up and therefore cannot be monitored for recovery 14 .
Corneal changes on eye examination were not always accompanied by patient symptoms or changes in best-corrected visual acuity (BCVA) 11 . Among all patients, 56% (53/95) had symptoms (e.g., blurred vision or dry eye symptoms) and/or a ≥2-line BCVA decline (in their better-seeing eye). Blurred vision and dry eye events were mainly grade 1/2. Seventeen patients (18%) experienced a clinically significant decline in BCVA to a Snellen score of 20/50 or worse in their better-seeing eye, at least once during or after the treatment period. In patients with normal/near normal vision at baseline, a Snellen score of 20/50 was used as a surrogate marker for a meaningful reduction in visual acuity. The majority (82%; 14/17) recovered as of the last follow-up. The median duration of these events was 21.5 days; therefore, most patients recovered after one assessment interval (conducted every 21 days during the trial). Of the remaining three patients with unrecovered events, one patient is receiving treatment and two patients are no longer in follow-up (one died due to disease progression; one withdrew from study). No patients treated with belamaf to date have had permanent vision changes or loss 15 .
Though ocular/corneal AEs are common with oncology therapeutics, keratopathy (MECs) is a novel treatment-emergent event that should be managed in patients with RRMM 11,16-20 . As described above, only a portion of patients with keratopathy (MECs) in DREAMM-2 experienced symptoms 11 . Given the high frequency and often asymptomatic nature of keratopathy (MECs) observed with belamaf treatment, close monitoring of corneal examination findings and changes in BCVA by an eye care professional is warranted. Therefore, it is important to provide hematologist/oncologists clear guidance on how to identify and monitor these events in patients receiving belamaf that they can use to guide treatment decisions. Here, we provide recommendations for hematologist/oncologists to actively manage keratopathy (MECs) based on clinical experience in DREAMM-2; reference should also be made to local labeling information for belamaf. A multidisciplinary team has been shown to improve care practices in the management of patients with hematological malignancies 21,22 . Therefore, we also propose recommendations to establish and facilitate a close collaboration between hematologist/oncologists and eye care professionals (ophthalmologists and optometrists) that will inform treatment decisions.
Keratopathy and visual acuity (KVA) scale development
Cumulative ocular/corneal safety data and related protocols from DREAMM-2 were reviewed by investigators (all hematologist/oncologists), eye care professionals who performed eye examinations of patients in DREAMM trials, and corneal specialists. Hematologist/oncologists and ophthalmologists also provided their expertise with managing RRMM treatment-related AEs and corneal surface diseases, respectively. The feedback from these experts was synthesized into a novel set of guidelines specific for belamaf-associated corneal events, called the KVA scale (Table 1).
DREAMM-2 study design
DREAMM-2 is an ongoing, open-label, two-arm, Phase II study being conducted at 58 MM specialty centers in eight countries 7 . Full methodological details of DREAMM-2 were previously reported 7 . In brief, eligible patients with RRMM were randomized (1:1) to receive belamaf (BLENREP) 2.5 or 3.4 mg/kg every 3 weeks by intravenous infusion over 30 min or longer, on day 1 of each cycle. Patients received treatment until disease progression or unacceptable toxicity occurred.
Full inclusion/exclusion criteria were previously reported 7 . Eligible patients had RRMM disease progression after >3 prior lines of anti-myeloma treatment; and were refractory to both an immunomodulatory agent and a PI, and refractory and/or intolerant to an anti-CD38 mAb. Patients were excluded if they had corneal epithelial disease at screening (other than mild dry eye).
Eye examinations were conducted by an eye care professional at baseline and every 3 weeks during the study 7 . Eye examinations included, at minimum, an assessment of the cornea using a slit lamp and measurement of BCVA. Eye examination findings and changes in BCVA were graded based on the most severe finding per KVA scale. Ocular symptoms (e.g., blurred vision and dry eye symptoms) were collected by the hematologist/oncologist as part of the ongoing safety monitoring on treatment and in follow-up and graded using Common Terminology Criteria for Adverse Events version 4.03 (CTCAE v4.03). Eye examination findings and changes in BCVA were also graded per CTCAE v4.03. Dose modifications (dose delays and reductions) were based on the severity of these events per the KVA scale.
To potentially mitigate corneal events in DREAMM-2, patients were instructed to self-administer prophylactic corticosteroid eye drops (four times daily, starting 1 day pre-dose for a total of 7 days) and preservative-free lubricant eye drops (4-8 times daily during the study) in both eyes 7 . Throughout the study, patients were prohibited from using contact lenses.
DREAMM-2 was performed in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines following approval by ethics committees and institutional review boards at each study site 7 . All patients provided written informed consent before enrollment.
KVA scale development
Given the association of ocular events with MMAFcontaining ADCs, including belamaf, a comprehensive approach was undertaken in DREAMM-2 to ensure the prompt detection and management of belamaf-associated corneal events 7,[11][12][13] . In reviewing the procedures and corneal event safety data from DREAMM-2, along with their expertise gained on the study, trial investigators and eye care professionals, including corneal experts, sought to streamline and further refine the guidance for hematologist/oncologists and eye care professionals who will be caring for patients receiving belamaf. Trial investigators expressed that the guidance provided in the DREAMM-2 protocol for the grading and subsequent management of corneal events based on this grading was difficult to follow. Hematologist/oncologists are generally unfamiliar with ocular AE terminology and the slit lamp examination and would have difficulty accurately grading events without the assistance of eye care professionals. Direction was taken from eye care professionals who assessed patients in DREAMM-2, as well as corneal experts, who advised on simplification of the scale used in DREAMM-2, to allow for more uniform grading by eye care professionals.
KVA scale: recommendations for identifying corneal events
Patients should undergo an eye examination at baseline, within 3 weeks before the first dose of belamaf 23, 24 . Recommendations for follow-up eye examinations differ regionally due to requirements of the relevant regulatory bodies. In the USA, eye examinations must be conducted before every dose, whereas in the EU, eye examinations are only required before the first three treatment cycles. In both the USA and EU, additional eye examinations are required promptly as clinically indicated (e.g., on worsening of ocular symptoms). As belamaf is administered every 3 weeks, follow-up examinations can occur at least 1 week after the previous dose and within 2 weeks before the next dose (ideally as close to the next dose as possible). This recommended timing allows patients some flexibility in obtaining an appointment with an eye care professional and for the outcomes of that examination to be sent back to the treating hematologist/oncologist. We recommend that eye examinations should continue every 3 weeks during dose delays, whether the delay is due to a corneal event or any other non-ocular event.
Eye examinations for patients receiving belamaf should include both slit lamp examination of the cornea and BCVA assessment 23,24 . The slit lamp microscope allows detailed eye examination 25 . All slit lamp examinations should include fluorescein staining to show abnormalities in the corneal surface 25 . Patients who received belamaf may present with superficial punctate keratopathy, MECs, or both. Keratopathy refers to superficial punctate keratopathy (revealed by fluorescein staining), which is a broad term referring to non-inflammatory changes in the outer layer of the cornea. MECs are microscopic deposits that resemble cysts, which may not stain with fluorescein. Pupil dilation, to better assess the health of the retina and optic nerve, is required at baseline, but not at follow-up examinations, unless clinically indicated 26 . Dilation may lead to light sensitivity for a few hours after the eye examination 26 . Therefore, patients should be advised to bring sunglasses to their baseline examination and arrange for travel assistance after the examination. We have previously published guidance for eye care professionals on the appearance of keratopathy (MEC), using slit lamp microscopy 11 .
BCVA is the clarity/sharpness of vision a patient can achieve with correction, measured using a Snellen chart 27 . Determining BCVA necessitates refraction, a test that measures the strength of the corrective lens needed to achieve precise focus 28 . Normal vision is considered to be a visual acuity score of 20/20 (if using feet) or 6/6 (if using meters) 27,29 . This means that at 20 feet or 6 meters from the chart, the patient can see what the average, healthy individual can see from that position 29 .

KVA scale: recommendations for the grading of corneal events

To date, a linear relationship has not been observed between the severity of keratopathy (MECs) and changes in BCVA in patients receiving belamaf. Therefore, corneal events should be graded using the KVA scale, based on the worst finding of either keratopathy (MECs) seen on eye examination or BCVA assessment (Table 1). Here, we provide a combined summary of these grades (grades as used in the US label; levels of severity as shown in the EU label) and dose modification guidelines in the current US and EU labels. Prescribing physicians should also refer to the guidelines for corneal AE management in their local labeling.
The severity of MECs is characterized by their location as well as density. MECs can start in the periphery of the cornea and in some cases migrate centrally 11 . Based on clinical observation, central changes to the cornea are more likely to be symptomatic and interfere with the patient's vision. Grade 1/mild corneal events are characterized by the appearance of only a few, if any, MECs with a low density (non-confluent), and predominantly (≥80%) located in the periphery of the cornea. Grade 2/moderate MECs are moderately dense (semi-confluent) and predominantly located in the paracentral region of the cornea. Grade 3/ severe MECs have a high density (confluent) and are predominantly located in the center of the cornea.
The worst severity for MEC density or location should be used for grading (Table 1). For example, the observation of semi-confluent (Grade 2/moderate) MECs, predominantly located in the center of the cornea (Grade 3/severe), would lead to this being graded as a Grade 3/ severe corneal event. Grading is also based on the worst finding in the worse-affected eye, since both eyes may not be affected equally. For example, a patient with Grade 1/ mild MECs in their left eye and Grade 2/moderate MECs in their right eye should be managed according to the guidelines for Grade 2/moderate MECs.
Using the KVA scale, Grade 2/moderate and Grade 3/ severe corneal events can also include examination findings of sub-epithelial haze (cloudy appearance in the layer immediately below the corneal epithelium) or stromal opacity (cloudy appearance of the stroma, the middle layer of the cornea). Similar to MECs, central sub-epithelial haze or stromal opacity is more severe (i.e., Grade 3/severe) than peripheral events (i.e., Grade 2/moderate).
The change in BCVA should also be used to determine the KVA scale grade of the corneal event (Table 1). A 1-line worsening from baseline in BCVA on the Snellen chart represents a Grade 1/mild corneal event. Grade 2 (moderate) is a decline in BCVA of 2 or 3 lines from baseline, while Grade 3 (severe) is represented by a >3 line decline on the Snellen chart with BCVA not worse than 20/200 (a Grade 4 event is defined as BCVA worse than 20/200 (6/60)) 23, 24 .
A corneal epithelial defect, defined as loss of corneal epithelium, may result in significant ocular discomfort, visual impairment, and an increased risk of infection 30 . Corneal epithelial defects (as well as more severe events such as corneal ulceration (a defect accompanied by an infiltrate or significant haze) and corneal perforation) are considered Grade 4/severe events 18,23,24 . A BCVA worse than 20/200 (6/60) is also considered the most severe visual event and denotes a Grade 4/severe event 23,31 . In DREAMM-2, one patient in the 2.5-mg/kg dose group and two patients in the 3.4-mg/kg dose group experienced a worsening of BCVA to 20/200 or worse in their better-seeing eye 7 . All these events recovered to baseline. To date, there has been no permanent loss of vision in patients receiving belamaf.
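The "worst finding" grading rule described above is essentially a small decision procedure and can be sketched in code. The following Python toy is illustrative only: the category names and integer mapping are paraphrased assumptions drawn from the text, not part of the KVA scale itself, and actual grading must follow Table 1 and local labeling.

```python
# Illustrative sketch of the KVA "worst finding" grading rule.
# Not clinical software: category names and the integer mapping below are
# paraphrased assumptions; Table 1 and local labeling are authoritative.

# Severity of microcyst-like epithelial changes (MECs), taking the worse
# of density and predominant corneal location, as described in the text.
MEC_GRADE = {
    "peripheral_nonconfluent": 1,    # Grade 1/mild
    "paracentral_semiconfluent": 2,  # Grade 2/moderate
    "central_confluent": 3,          # Grade 3/severe
}

def bcva_grade(lines_lost: int, worse_than_20_200: bool) -> int:
    """Grade the change in best-corrected visual acuity from baseline."""
    if worse_than_20_200:
        return 4          # BCVA worse than 20/200 (6/60): Grade 4
    if lines_lost > 3:
        return 3          # >3-line decline on the Snellen chart
    if lines_lost >= 2:
        return 2          # 2- or 3-line decline
    if lines_lost == 1:
        return 1          # 1-line decline
    return 0

def kva_grade(mec_findings, bcva_changes) -> int:
    """Overall grade = worst finding (MEC or BCVA) across both eyes."""
    grades = [MEC_GRADE[f] for f in mec_findings]
    grades += [bcva_grade(lines, worse) for lines, worse in bcva_changes]
    return max(grades, default=0)

# Example from the text: Grade 1 MECs in the left eye and Grade 2 MECs in
# the right eye are managed per the worse-affected eye, i.e., Grade 2.
assert kva_grade(
    ["peripheral_nonconfluent", "paracentral_semiconfluent"],
    [(1, False), (1, False)],
) == 2
```

The `max` over all findings captures both rules at once: grading by the most severe finding present and by the worse-affected eye.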
KVA scale: recommendations for management of corneal events
Belamaf corneal events are managed using dose modifications based on the KVA scale (Table 1) 23,24 . Treatment should be continued at the current dose for Grade 1/mild events without interruption. For Grade 2/moderate events, treatment should be delayed until the event improves to a Grade 1/mild event or resolves; treatment can then be resumed at a lower dose (1.9 mg/kg). Treatment should also be delayed for Grade 3/4 (severe) events until these improve to a Grade 1/mild event or resolve completely, after which treatment should be restarted at 1.9 mg/kg; therapy can be reinitiated immediately after such improvement or complete resolution.
For Grade 4/severe events, a benefit-risk assessment should be conducted to determine if permanent treatment discontinuation is required. If it is decided that treatment should be resumed following recovery of a Grade 4/severe event to Grade 1 or better, a reduced belamaf dose of 1.9 mg/kg is recommended. Treatment discontinuation should be considered for worsening symptoms that are unresponsive to appropriate management. For patients who have more than 1 dose delay/interruption due to corneal events, the belamaf dosing schedule of every 3 weeks should be maintained once events have improved and treatment restarted (i.e., the dosing schedule should not be modified in these patients).
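The dose-modification rules in the two paragraphs above can be summarised as a small decision function. This is an illustrative sketch of the logic as described in the text, not medical software; the function name and string outputs are invented for this example:

```python
def belamaf_dose_action(grade, recovered_to_grade1_or_better=False):
    """Illustrative sketch of the KVA-based dose-modification rules above.

    grade: current KVA corneal event grade (1-4).
    recovered_to_grade1_or_better: whether the event has improved/resolved.
    """
    if grade <= 1:
        # Grade 1/mild: continue without interruption.
        return "continue at current dose"
    if not recovered_to_grade1_or_better:
        # Grades 2-4: hold treatment until improvement/resolution.
        return "delay treatment until event improves to Grade 1 or resolves"
    if grade == 4:
        # Grade 4: benefit-risk assessment; if resumed, use the reduced dose.
        return "benefit-risk assessment; if resumed, restart at 1.9 mg/kg"
    # Grades 2-3: resume at the reduced dose once improved/resolved.
    return "resume at reduced dose of 1.9 mg/kg"
```

Note that, per the text, the every-3-weeks dosing schedule itself is not modified even after repeated dose delays.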
In DREAMM-2, patients with a history of dry eye were more likely to develop keratopathy (MECs) compared with patients who did not have a history of dry eye 11 . Dry eye disease can manifest as punctate keratopathy and result in damage to the ocular surface 32 . However, the underlying etiology of belamaf-associated keratopathy (MECs) is not yet known 11 . Therefore, it is recommended that patients use preservative-free lubricant eye drops at least four times a day, starting before the first infusion and continuing until the end of treatment (Table 2) 23, 24,33 . Additional supportive care measures may be considered, as recommended by the eye care professional 24 . An eye care professional may advise the patient to use bandage contact lenses, which are soft therapeutic contact lenses designed to serve as a protective barrier for the ocular surface 30 .
Corticosteroid eye drops are not recommended as a management strategy, because these were shown to be ineffective in preventing keratopathy (MECs) in DREAMM-2 7 . Prophylactic topical corticosteroid eye drops have been used in studies with other ADCs to mitigate corneal events 11 . However, results have been mixed, and therefore no clear benefit has been demonstrated with this strategy to date.
Multidisciplinary approach to manage corneal events
In light of the known corneal events associated with single-agent belamaf for patients with RRMM, effective management requires close collaboration and clear communication between hematologist/oncologists, the wider RRMM patient care team (e.g., oncology nurses, nurse practitioners, and physician assistants), and eye care professionals to identify and manage these events. Figure 2 summarizes the roles of these professionals within a multidisciplinary approach to managing corneal events.
The roles of the hematologist/oncologist and the RRMM patient care team
It is important that the RRMM care team managing a patient treated with belamaf is knowledgeable of the potential risks of corneal events. When the hematologist/oncologist decides to prescribe belamaf for their patient, they should educate themselves, the broader RRMM patient care team, and the patient on the potential risks of corneal events and how to monitor and appropriately manage these events. Educational materials for the RRMM patient care team and the patient are available through the Risk Evaluation and Mitigation Strategy program in the USA and the Risk Management Plan program in the EU 34,35 .

Table 2 Supportive eye care recommendations: advise patients to use preservative-free lubricant eye drops at least 4 times a day in both eyes, starting with the first infusion and continuing until end of treatment 22,23 ; avoid use of contact lenses unless clinically warranted, as contact lenses may irritate the cornea 30 , although an eye care professional may direct the patient to use bandage contact lenses, which help protect and aid in repair of the corneal epithelium 27 . These measures begin at the first infusion, continue throughout treatment 22,23 , and are relevant to both eyes. Belamaf belantamab mafodotin, KVA keratopathy and visual acuity.
Fig. 2 Flow chart of multidisciplinary approach to managing corneal events with belamaf: health care professional roles. The figure summarises the roles of the hematologist/oncologist and the wider RRMM patient care team (nurses, nurse practitioners, and physician assistants). Before starting treatment, the hematologist/oncologist educates themselves, the wider RRMM patient care team, and the patient on the potential risk of corneal events and appropriate monitoring/management strategies from the local label, while the patient care team refers the patient to an eye care professional and helps educate the patient (advising against contact lens use during belamaf treatment unless directed by an eye care professional, ensuring the patient reports ocular symptoms, and advising caution when driving or operating machinery). On treatment, the hematologist/oncologist reviews the eye care professional's examination report, determines the appropriate therapeutic strategy based on the corneal event grade/severity guidelines in the local label, and shares information with the wider RRMM patient care team that may impact belamaf treatment continuation/interruption.
Specifically, the RRMM patient care team should advise the patient to not wear contact lenses during belamaf treatment, unless they are directed to by an eye care professional, as contact lens use can contribute to dry eye and other ocular complications 23, 24,36 . The patient care team should advise patients to exercise caution when driving or operating machinery, since changes in visual acuity may occur 23, 24 . Importantly, the team should discuss with the patient whether additional caregiver support is needed to maintain activities of daily living (ADLs; e.g., transportation to medical appointments) in the event of a transient change in visual acuity.
Given that corneal examination findings were not always accompanied by symptoms, the RRMM patient care team must help ensure adherence to the eye examination requirements at baseline and during treatment to accurately monitor for corneal changes 11,23,24 . The patient care team should also reiterate to the patient the importance of reporting ocular symptoms-in particular, blurred vision, subjective symptoms of dry eye, and changes in visual acuity. They should also routinely ask the patient about the impact of any ocular symptoms on ADLs, such as subjective blurred vision that interferes with reading or leads to difficulty driving 23, 24 . Table 3 provides examples of questions for the team to use to assess the impact of ocular-related symptoms on a patient's ADLs.
The hematologist/oncologist should review the patient's eye examination before dosing and determine the appropriate therapeutic strategy based on the grade of the most severe finding per the KVA scale (Table 1) 23, 24 . Therefore, timely information exchange with eye care professionals is crucial for ensuring seamless treatment continuation, or delay if required to manage corneal events.
The role of the eye care professional
The RRMM patient care team will refer the patient to an eye care professional to perform a baseline eye examination before starting belamaf treatment 23,24 . At the baseline examination, the eye care professional should ask about preexisting ocular conditions of interest, such as a history of glaucoma, cataract, any ocular surgeries (including laser-assisted in situ keratomileusis (LASIK) or refractive surgery) or eye trauma, diabetic retinopathy, and macular degeneration that might affect the BCVA. The eye care professional should then advise the hematologist/oncologist of any ocular history that would necessitate changes to supportive measures. For example, patients with a history of dry eye may be advised to use preservative-free lubricant eye drops more frequently than specified in the belamaf local label.
After follow-up examinations, the eye care professional should provide the hematologist/oncologist with the overall corneal event grade/severity using the KVA scale in their local label (Table 1) 23,24 . This overall score will represent the worst finding on either corneal examination or BCVA change assessment. The hematologist/oncologist will use this overall grade/severity to determine any appropriate dose modification; thus, it is important to clearly communicate these findings in terminology consistent with the local label.
If clinically warranted, the eye care professional may propose additional mitigation strategies for corneal events (e.g., bandage contact lenses or punctal plugs (to block tear duct drainage)). Eye care professionals should reiterate to patients the need to adhere to supportive care measures (e.g., preservative-free lubricant eye drops) and to exercise caution when driving or operating machinery 23,24 .

Table 3 Example questions to ask patients to facilitate reporting of new corneal-related AEs with belamaf treatment.

During conversations with patients regarding the effects of their treatment, it may be helpful to ask the following questions regarding new corneal AEs they may be experiencing with belamaf: • Are you finding it difficult to read during the day due to your eyesight? Or at night?

• Have you noticed any problems with your eyesight while driving?

• Do you have any problems with your eyes or vision when using a computer/tablet/phone or watching the television?

• Have you needed to increase the font size on your devices so that you can see the text better?

• Have you noticed any vision changes or other symptoms when you engage in any other activities that are important to you?

• Have you experienced any pain or discomfort in or around your eyes?

• Are your eyes more sensitive than usual to light?

• Have you needed to turn off the lights or wear sunglasses indoors because you were more sensitive to light?

• Have you noticed any other symptoms related to your eyes or eyesight?

AE adverse event.
Discussion
Belamaf is a first-in-class, anti-BCMA therapy that demonstrated deep and durable clinical responses as a single agent in patients with heavily pretreated RRMM in DREAMM-2 7 . Keratopathy (MECs), an eye examination finding with or without ocular symptoms, was the most common AE. The risk of keratopathy (MECs) does not appear to decrease over time and these events will recur with repeated dosing, so it is imperative to gain a better understanding of how to optimize corneal event management. Overall, corneal events were manageable with dose modification and supportive care (e.g., preservative-free lubricant eye drops) while patients remained on treatment 11 . These recommendations are also supported by studies with other MMAF-ADCs, which found that treatment-associated corneal changes improved or resolved upon dose modification (reduction and/or delay) or treatment discontinuation 11,37-41 .
Balancing efficacy against the risk of AEs in RRMM management depends on individual considerations for each patient 42 . The RRMM patient care team should be familiar with the potential corneal event risks with belamaf treatment. Referral to an eye care professional is required for regular eye examinations including before treatment initiation and subsequent treatment cycles 23, 24 . Depending on the severity of corneal events, management may require belamaf dose reduction and/or delay 23,24 . When possible, it is important for patients to continue treatment in order to maximize survival outcomes. For example, early treatment discontinuation to manage AEs related to immunomodulatory agents may lead to poorer clinical outcomes 42 . Our guidelines have been written to support resumption of treatment following recovery or improvement of corneal events. Going forward, continuing data collection, experience, and insights from hematologist/oncologists and eye care professionals will help to inform these current event management guidelines.
Several avenues of research are ongoing to determine the etiology and potential mitigation strategies of these events. The pathophysiology of the keratopathy (MECs) observed with belamaf and other ADCs is currently unknown 11 . We recently proposed a mechanism whereby keratopathy (MECs) represents an off-target effect of belamaf-induced apoptosis of corneal epithelial cells. Apoptotic corneal epithelial cells are eventually replaced with new epithelial cells, ultimately leading to the resolution of keratopathy (MECs) and symptoms after completion of treatment. This is supported by the apparent migration of the MECs, in some patients, and the evolution of the MECs' appearance from large clear-spheres to tiny ill-defined flecks. Additional research will help to validate or revise this hypothesis. Furthermore, there is preclinical evidence that belamaf may enter the cornea through the tear film, and exposure-response analyses suggest that belamaf trough concentration may correlate with the probability and timing of corneal events 11,43 . Modeling studies are underway to evaluate the time course of corneal events linked with belamaf pharmacokinetics to determine whether alternative dosing schedules can mitigate corneal exposure. Additional corneal event management strategies are also being investigated.
Knowledge of potential AEs associated with particular therapies may help promote early and effective interventions to prevent and reduce the impact of these events on patients' quality of life. Close collaboration between hematologist/oncologists and eye care professionals is needed to determine appropriate dose modifications based on the severity of corneal events. These collaborations can also help ensure that patients have easy access to an eye care professional, the eye care professional understands what examinations and assessments are needed, and the hematologist/oncologist receives accurate reports to help appropriately treat their patients. Continued collaboration and effective communication between hematologist/oncologists and eye care professionals are crucial for the effective management of corneal events to ensure that patients receive optimal treatment.
Belamaf is now being investigated to treat RRMM in combination with other anti-tumor treatments, including novel therapies with complementary mechanisms of action. Several of these ongoing studies include a dose exploration phase that will evaluate the belamaf safety and tolerability profile in combination with other anti-tumor treatments. These include the Phase I/II DREAMM-5 platform trial 44 , and the Phase III DREAMM-8 study (NCT04484623) 45 . Other studies such as DREAMM-6 (NCT03544281) 46 , a Phase I/II, open-label, dose escalation and expansion study to evaluate safety, tolerability, and clinical activity of belamaf in combination with approved regimens and DREAMM-4 (NCT03848845) 47 , a Phase I/II, single arm, open-label, two-part study investigating the safety, tolerability and clinical activity of belamaf in combination with a programmed cell death-1 inhibitor pembrolizumab, are also underway. Other ongoing clinical trials include the Phase I/II (NCT03715478), 2-part multi-center, dose escalation study evaluating the maximum tolerated dose, and recommended Phase II dose, safety, tolerability, and efficacy of belamaf in combination with pomalidomide and dexamethasone 48 . These studies should provide further insights into the optimization of belamaf dosing, as well as the effectiveness of mitigation strategies (such as temporary treatment delays), to potentially reduce the frequency, severity, and/or overall impact of corneal events in patients who could benefit from belamaf treatment.
Paediatric Restrictive Cardiomyopathy - Diagnosis and Challenges
Restrictive cardiomyopathy is one of the rarest forms of cardiomyopathies in paediatric patients characterised by impaired myocardial relaxation or compliance with restricted ventricular filling, leading to a reduced diastolic volume with a preserved systolic function. We report 2 cases—a 5-year-old boy who presented with abdominal distension and palpitation with family history of similar complaints but no definite genetic diagnosis as yet and a 5-year-old girl who presented with chronic cough and shortness of breath. Both cases were diagnosed in a tertiary care hospital in Muscat, Oman, in 2019 and are managed supportively with regular outpatient follow-up. This is the first series of reported cases of paediatric restrictive cardiomyopathy from Oman.
Restrictive cardiomyopathy (RCM) is one of the rarest forms of cardiomyopathies in paediatric patients, with an overall prevalence of 2-5% of all types of cardiomyopathies. 1 Functionally, RCM is characterised by impaired myocardial relaxation or compliance with restrictive filling, leading to a reduced diastolic volume with a preserved systolic function. 2 This leads to atrial dilatation, which is represented by the large P-waves on an electrocardiogram (ECG) and bundle branch block. 1 Findings in an ECG illustrate atrial dilatation and small ventricles, with an element of atrioventricular regurgitation that worsens the atrial enlargement. 1 The causes of RCM are divided into primary and secondary, which are subdivided into familial/sporadic causes and systemic disorders, respectively. 3 Secondary causes are usually seen in adults and include systemic infiltrative diseases like amyloidosis, Gaucher's disease and storage disorders such as Fabry's disease and others including scleroderma, endomyocardial fibrosis and carcinoid syndrome. 3 RCM can be confused with constrictive pericarditis and is challenging to differentiate clinically or with imaging; however, it is important to differentiate them as constrictive pericarditis can be treated surgically, whereas RCM has a high mortality rate. 1 The reported complications of RCM include heart failure and arrhythmias, compared to the rarer complications including thromboembolism and increased pulmonary vascular resistance. 4 We report 2 patients who presented at a tertiary care hospital in Muscat, Oman, with RCM at an early age of 5 years-one male with abdominal distension and palpitations and one female with chronic cough and shortness of breath.
The abdomen was distended on inspection with visible abdominal veins. There was a non-tender hepatomegaly of 7 cm below the right costal margin and no splenomegaly. The rest of the examination was normal. As a result, he was first checked by the gastroenterology team and was referred to cardiology once gastro-intestinal causes were ruled out.
Laboratory findings are illustrated in Table 1. Abdominal ultrasound revealed an enlarged, echogenic, coarse liver suggestive of liver parenchymal disease and congestive hepatomegaly. The hepatic veins and intrahepatic inferior vena cava were dilated. A chest X-ray was done on admission and was unremarkable.
ECG findings are shown in Table 2. The echocardiography showed severely dilated right and left atria with severe tricuspid valve regurgitation along with trivial mitral regurgitation. It also illustrated mildly reduced systolic function with an ejection fraction of 50%, and the left ventricle showed apical trabeculations with features suggestive of noncompaction. There was no pericardial effusion. The detailed findings of the echocardiography are shown in Table 3 and Figure 1. The 24-hour Holter ECG was normal.
He was started on furosemide, spironolactone and digoxin. Currently, he is followed up with cardiology every 6-8 months and has had 2 admissions since diagnosis for chest infections. Consent for publication was obtained from the patient's guardian.

A 5-year-old previously healthy girl presented with a 4-day history of cough and poor oral intake. There was no history of fever, no shortness of breath and no exposure to sick contacts. She had a history of night sweats and palpitations that were aggravated by change of posture. There was no history of chest pain, cyanosis or syncope. She had a similar episode 1 month prior and was treated symptomatically elsewhere. Her father reported a history of easy fatigability with running as compared to his other children and also poor appetite and poor weight gain for the past 2 years. There is a history of inability to sleep lying flat and needing head elevation for the last few months. She had cataract surgery at the age of 3 years. She is the eldest child of first-degree consanguineous parents. One of her paternal cousins had a cardiac defect that needed catheterisation, but no details were available. She also had a maternal aunt who developed valvular heart disease at the age of 12 years and required valve replacement. She has 3 other younger siblings who are doing well. There is no history of other cardiac disease in the family.
On physical assessment, the child was in respiratory distress with mild recessions and tachypnea up to 30 breaths per minute. Her weight was 14.75 kg (10th percentile) and her height was 114.5 cm (10th percentile). She had periorbital edema with hypertelorism and clubbing. A chest examination revealed bilateral basal scattered crepitations. Cardiac examination revealed normal heart sounds, with a gallop and a pansystolic murmur grade II/VI best heard at the apex. Abdominal examination revealed distension and tender hepatomegaly of 8-9 cm below the right costal margin.
Laboratory findings are depicted in Table 1. A chest X-ray showed cardiomegaly with congested lungs and right para-cardiac haziness. ECG and echocardiography findings and images are shown in Tables 2 & 3 and Figure 1, respectively. Her initial working diagnosis was multisystem inflammatory syndrome in children (MIS-C) causing acute heart failure. She also had a full septic work-up-to rule out pneumonia, pleural effusion and myocarditis-and was initially started on furosemide and spironolactone. During her further admissions, digoxin, captopril and aspirin were added gradually. The genetic and metabolic teams were involved to exclude secondary causes. At the last follow-up, she was intermittently in atrial flutter-fibrillation needing higher doses of digoxin for rate control. Consent for publication was obtained from the patient's guardian.
Discussion
RCM is one of the common causes of adult diastolic heart failure, which could be explained by the different risk factors affecting this age group. 5 In contrast, these risk factors are absent in the paediatric age group, making this type of cardiomyopathy a rare occurrence in children, with an incidence of 0.04 per 100,000 in the USA. 6 It is mostly diagnosed between the ages of 6-10 years, corresponding with the current cases, where both patients were 5 years of age. 5 Both current cases were idiopathic.
As per the American Heart Association (AHA), the most common mode of inheritance is autosomal dominant. 7 In case 1, the 5-year-old boy also had a strong family history of cardiac diseases and sudden deaths suggesting autosomal dominant inheritance, though no genetic diagnosis was confirmed until the write-up of this report.
RCM presents with a wide variety of symptoms, making the diagnosis even more difficult. In one case study, a 10-year-old boy collapsed as he was playing football and was found to have a large liver and high B-type natriuretic peptide, along with abnormal echocardiography findings consistent with RCM. 5 Both the current patients presented with hepatomegaly, along with cough and fever. Similar to the current cases, the patient in Denfield's study also had recurrent respiratory illnesses. 5 These presentations are mostly due to the high filling pressures that cause pulmonary edema, pulmonary hypertension, hepatomegaly and peripheral edema. 5 On the other hand, a case report from Saudi Arabia described an 11-year-old girl who presented with lower limb swelling and paraesthesia with no chest pain or shortness of breath, who was diagnosed with thromboembolism and RCM and treated with cardiac transplant. 8 Therefore, the non-specific signs and symptoms of RCM may lead to an initial diagnosis of different respiratory or alimentary illnesses, and the cardiac diagnosis may be missed or delayed, as in the current 2 cases.
The first case was admitted under general paediatrics and the initial assessment was performed by the gastroenterology team. Once gastroenterological causes were ruled out, the patient was referred to the cardiology team. The main cause of delay in the diagnosis was the presentation with abdominal distension and hepatomegaly; hence, causes such as liver diseases and malignancies were ruled out before looking for other causes.
The second case had a chronic non-specific cough for 2-3 months. She was seen elsewhere by a general paediatrician and was treated for acute chest infection versus asthma. The chest X-ray done outside showed borderline cardiomegaly, which was missed, as were important details in the history, such as worsening inability to lie flat and easy fatigability.
Both cases presented with non-cardiac symptoms, which led to a delay in the diagnosis. These cases underscore the importance of a good history and physical examination and the need to approach patients with chronic complaints with a wider frame of mind.
The diagnosis of RCM can be made by utilising the ECG, echocardiography and cardiac MRI, if needed. The main finding, as per the AHA, is biatrial enlargement on echocardiography and surface ECG with preserved systolic function. 7 Commenting on the diastolic function in the paediatric age group may be difficult due to the variability of presentation or the need for sedation. 8 Echocardiography can also differentiate between RCM and constrictive pericarditis (CP), which changes the management completely. 9 In CP, the chamber compliance is reduced due to external pressure, causing an increased interventricular dependence and irregularity between intracardiac and intrathoracic pressure during respiration, as shown by Doppler echocardiography along with septal shifting. 10 More specifically, annular tissue Doppler can further distinguish the two entities. In RCM, the early diastolic velocity of the mitral annulus is reduced, whereas it is normal or increased in CP. 10 Cardiac catheterisation shows similar features in both diseases, including early rapid diastolic filling with elevated end-diastolic pressures. The main finding to differentiate CP from RCM is the respiratory variation in pressures. 10 Biopsy of the endocardium in children with RCM is not specific and is not helpful in making the diagnosis. 9

The prognosis of RCM is generally poor, with a survival rate of approximately 2 years from the day of diagnosis. 11 Management of RCM is mainly symptomatic, involving diuretics in pulmonary congestion, pacemakers in arrhythmia, and anticoagulants in a thromboembolic event. 12 The use of diuretics should be carefully assessed, as these patients are preload dependent and should not be over-diuresed ('dried'). 12 There is no proven role for digoxin and beta-blockers; however, they might be helpful for tachyarrhythmia or heart rate control. 5,13 The definitive therapy is a cardiac transplant, for which the 10-year survival rate post-transplant has been shown to be similar to other types of cardiomyopathies.
12 The outcome of transplant has improved, with a median graft half-life of 12 years. 11 The question of when to refer such cases for heart transplantation is still controversial. However, since medical therapy is only symptomatic, many centres list these cases for transplant immediately after diagnosis. 11 The final decision lies with the institute itself and its own criteria. 13 A case series from Spain reported 9 cases of RCM, of which 5 underwent cardiac transplant with at least 4-year survival post-transplant. 3 The need for heart transplant was not discussed in the current cases due to the non-availability of this management option in Oman.
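The tissue Doppler distinction described in the discussion (reduced early diastolic mitral annular velocity in RCM versus normal or increased in CP) can be summarised as a simple rule. The sketch below is illustrative only; the function name and the cut-off value are placeholder assumptions for this example, not a validated clinical threshold:

```python
def suggests_rcm_vs_cp(e_prime_cm_s, lower_limit_normal=8.0):
    """Illustrative rule only (placeholder threshold, not clinical guidance).

    In RCM the early diastolic mitral annular velocity (e') is reduced,
    whereas in constrictive pericarditis (CP) it is normal or increased.
    """
    if e_prime_cm_s < lower_limit_normal:
        return "RCM-like"          # reduced annular velocity
    return "CP-like (normal/increased e')"
```

Any real differentiation would also weigh the respiratory variation in pressures seen at cardiac catheterisation, as noted above.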
Conclusion
RCM in children is a rare entity, with no cases reported in Oman to date. Proper symptomatic management is essential in children with RCM and, most importantly, a timely heart transplant to prevent sudden cardiac death as well as irreversible pulmonary hypertension. In Oman, there is a need for a national programme for heart transplant to help children with such diseases.
Figure 1: Echocardiography of (A) case 1 and (B) case 2 that presented to a tertiary care hospital in Muscat, Oman in 2019.
Table 1: Laboratory findings of the two cases that presented to a tertiary care hospital in Muscat, Oman in 2019.
Table 2: Electrocardiogram changes of the two cases that presented to a tertiary care hospital in Muscat, Oman in 2019. RVH = right ventricular hypertrophy; LVH = left ventricular hypertrophy; QTc = corrected QT interval for heart rate.
Table 3: Echocardiography findings of the two cases that presented to a tertiary care hospital in Muscat, Oman in 2019.
Internal Assessment of the Orthodontics Department of Kermanshah University of Medical Sciences, Iran
Internal assessment is an inherent element of every educational system. Assessment of educational programs could determine the extent to which educational goals are achieved, and the shortcomings of the educational programs could also be detected and corrected as such. As a result, the quality of educational programs could be improved profoundly (1,2).
The School of Dentistry of Kermanshah University of Medical Sciences (KUMS) in Iran started its activity less than a decade ago. The faculty members of the School of Dentistry are relatively young and may have shortcomings due to inexperience. To the best of our knowledge, a comprehensive assessment of the performance of the faculty members and students of this university has not been conducted so far. Considering the significance of the information on this subject to improve the quality and quantity of the output of the university, the present study aimed to perform a comprehensive assessment of the performance of the Orthodontics Department of the School of Dentistry of KUMS, including the head of the department, faculty members, technicians, students, and graduates, in 2018.
Initially, the faculty members were informed on the study and the significance of internal assessment and its process. Since internal assessment should be based on the set targets of the department, an internal assessment committee was first recruited to determine and write down the goals and notions of the department. For this purpose, the general goals of the educational programs, tasks of the department, and the major and minor goals were specified. At the next stage, the internal assessment factors, criteria, and indicators were determined. Finally, the designed questionnaires were approved by the head of the department and the assessment committee, and their optimal validity was confirmed based on eight parameters, which were evaluated based on 153 criteria and 453 indicators. Each indicator was scored based on a three-point Likert scale as unfavorable (score zero), relatively favorable (score one), and favorable (score two).
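The three-point Likert scoring described above maps each indicator rating to a numeric score, from which a mean favorability score per criterion or factor can be computed. A minimal sketch (the function and variable names are invented for this example):

```python
# Score mapping for the three-point Likert scale described in the text.
LIKERT = {"unfavorable": 0, "relatively favorable": 1, "favorable": 2}


def mean_score(ratings):
    """Mean indicator score on the 0-2 Likert scale for a list of ratings."""
    return sum(LIKERT[r] for r in ratings) / len(ratings)


# Example: two indicators rated "favorable" and "unfavorable" average to 1.0,
# i.e. a "relatively favorable" level overall.
level = mean_score(["favorable", "unfavorable"])
```

Aggregating such means across the 153 criteria and 453 indicators would yield the per-factor favorability levels reported in Table 1.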
The questionnaires were distributed among the participants. They were initially provided with information regarding the questions and the reasons for the evaluation, and were assured of the confidentiality of their responses. A checklist was used to assess educational equipment, facilities, and human resources. Table 1 shows the obtained results regarding the favorability level of each factor, as well as its criteria and indicators.
Based on the findings, the following suggestions are proposed to help eliminate the weaknesses and further improve the strengths of this department:
1-Goals and notions should be revised, and recruiting postgraduate students should be defined as a goal of this department.
2-Proper programs should be designed to improve the activities and cooperation of students and faculty members in the department.
3-Considering the relatively young faculty members of the department, it seems that the scientific and academic ranking of the faculty members would improve over time.
4-The cooperation of the faculty members in the research projects of other organizations should be promoted.
5-Facilities should match the number of the admitted students.
6-It is recommended that the faculty members participate in courses of novel educational techniques to enhance the quality of education.
7-Communication with the graduates should be improved.
Gedanken Experiment (Thought Experiment) about Gravo-Electric and Gravo-Magnetic Fields, and the Link to Gravitons and Gravitational Waves in the Early Universe
Our Gedanken experiment is a thought experiment concerning what are called gravo-electric and gravo-magnetic potentials linked to gravo-electric and gravo-magnetic fields. We examine what Padmanabhan presented in an exercise as a linkage of electromagnetic fields with gravitation. The modifications we bring up take the nonrelativistic approximation as the beginning of an order-of-magnitude estimate as to gravitons and generated electromagnetic fields, and are by definition linked to the total angular momentum of an initial configuration of "particles" of space-time import. The innovation put into Padmanabhan's calculation is, for the total mass M used, to substitute M ~ N(gravitons) times m(g), where m(g) is about 10^−62 grams, as well as to specify distances, with the spinning object being about a Planck length in size, give or take a few orders of magnitude. The results are by definition very crude and do not take relativistic effects into account, but are probably important within an order of magnitude. We conclude with a comment on the possibility of an additional polarization, since a response function of an interferometer to "scalar" polarization may indicate a scalar-tensor gravitational theory as a replacement for General Relativity.
More General Energy Expression Given below
Our tack is to take the energy expression from [1], with a minimum energy given as follows: if φ is an inflaton, a(t) is the square of the scale factor, and δg_tt is a non-dimensional perturbation of the "time" factor in a geodesic, then [1] [2] give Equation (1). We then take the work from [3] to come up with a gravo-electromagnetic frequency, which is set equal so as to give an initial import of energy according to a frequency, Equation (2). This frequency, as isolated in Equation (2), will be compared to the frequency generated by the gravo-electric and gravo-magnetic fields in the argument below. It will suggest something about the inflaton, as discussed in the last part of this document.
We will take the square of the frequency given in the second line of Equation (2) and compare it to the gravo-electric and gravo-magnetic generated frequency value, as our first-principles linkage of electromagnetic waves with gravitons. We should keep in mind that the volume, which is for a complete cosmology, is small. So, let us now come up with a gravo-electric and gravo-magnetic counterpart to Equation (2) above. To do that, we take an argument given in [3], pages 278-279 (exercise 6.15), of a magnetic and electric field being generated by a source with arbitrary density, having the following two lowest-order perturbations, where M is the total mass and J_αβ the angular momentum tensor. Then, we shall reference the potentials given below. Note that we are referring to a volume V which will be the entire spatial domain of the universe, but our volume V is in early-universe conditions of Planckian-size dimensions. The author thanks the referee for the necessity of making obvious how to properly interpret Equation (1) given above. Here, G is the usual gravitational "constant", c the speed of light, and x the spatial dimensions. The angular momentum tensor J_αβ should be thought of, in this case, as having a relationship to electromagnetics, as given in [4] below, whereas we are taking Equation (1) as formulated from the equations on pages 278-279 of reference [3]. Then, again by [3], exercise 6.15, page 279, the following gravo-electric and gravo-magnetic fields appear, Equation (4). Here, x̂ is a unit vector in the radial direction and S an angular velocity, which leads to the following angular frequency, by [3], exercise 6.15, page 279, Equation (5). From here, we will proceed to modify M and S by gravitational physics.
Modify M, and S by Gravitational Physics
What we are going to do is restrict M to the case of heavy gravity in the Planckian regime, call G the usual gravitational physics variable, and define S as follows for M and S: N being the number of initial gravitons, and the radius taken as 10^ℵ times the Planck length, with ℵ > 0, so that up to a good approximation we obtain Equation (6). Then the maximum initial value of the angular frequency of Equation (5) is given by Equation (7).
Modify M, and S by Gravitational Physics with Numerical Inputs into Equation (7) for Frequency
Taking N_gravitons ≈ 10^37, a rest mass for a massive graviton of about 10^−62 grams, due to [5], plus a radial distance r from the source of graviton production, would lead to relic gravitational waves reduced dramatically from the beginning radius, presumably about 1 meter, to the present radius of the universe. Then from [4] [6] we have for gravitons an energy value, if m is the mass of a "massive" graviton, using the relativistic formula as given in [2] [6] to approximate to first order, Equation (9). Compare this value of energy by making the following scaling, namely equating Equation (5) and Equation (9) as in Equation (10), which is then compared with, and implies, a frequency-squared value as in [2], Equation (11). To get to the present value of the relic wavelength for the initially produced gravitons, take the upper value of λ given in Equation (11), whereas if r in Equation (10) is significantly less than 1 meter, the emergent radiation would be as in Equations (12) and (13). These are extremely rough estimates.
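The numerical inputs quoted above can at least be book-kept explicitly. The closed-form frequency expressions of Equations (5)-(13) are not reproduced here; this sketch only tracks the substitution M = N · m_g, with an assumed graviton count N ~ 10^37 and a graviton rest mass m_g ~ 10^−62 g, plus the corresponding rest energy.

```python
# Order-of-magnitude bookkeeping only: total effective mass M = N * m_g and
# its rest energy M c^2. The graviton count and rest mass are the values
# quoted in the text; everything else in Equations (5)-(13) is omitted.

c = 2.998e8             # speed of light, m/s
N_gravitons = 1e37      # assumed initial number of gravitons
m_g = 1e-62 * 1e-3      # graviton rest mass: 10^-62 g converted to kg

M = N_gravitons * m_g   # total effective mass, kg (~10^-28 kg)
E_rest = M * c ** 2     # rest energy of the configuration, J
```

This is only the nonrelativistic input side; the angular-frequency estimate of Equation (7) additionally needs the Planck-length radius and the spin input S, which are not reproduced here.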
Considerations as to Bicep 2, the Matter of Scalar-Tensor Polarizations as an Alternative to General Relativity and Alternate Gravitational Theories, and Experimental Tests of General Relativity via Interferometric Methods
From [7] we have the following to consider, namely trying to determine restraints upon the nature of gravity: is it consistent with General Relativity, or do we have an alternative situation as given in the following quote? We hope that getting a consistent model of inflaton physics will help clarify the following alternatives: Quote This fact rules out the possibility of treating gravitation like other quantum theories, and precludes the unification of gravity with other interactions. At the present time, it is not possible to realize a consistent Quantum Gravity Theory which leads to the unification of gravitation with the other forces. On the other hand, one can define Extended Theories of Gravity as those semiclassical theories where the Lagrangian is modified, with respect to the standard Einstein-Hilbert gravitational Lagrangian, adding high-order terms in the curvature invariants (terms like R^2, etc.) or terms with scalar fields non-minimally coupled to geometry (terms like φ^2 R). End of quote We claim that the strength of the inflaton term, as we will give in Equation (15), may allow us to determine whether we have to use a semiclassical set of terms which add more terms to the space curvature of early-universe Planckian space-time geometry. We also have to temper this quest by requiring that the following holds, as given in [8], namely: Quote Recent data from Planck match well with the minimal ΛCDM model. In a likelihood analysis using Planck, WMAP and a selection of high-resolution experiments (highL), the tensor-to-scalar ratio r_0.002 is found to be < 0.11 when dn_s/d ln k = 0.
End of quote Our inflaton, which is given in Equation (15), must be made consistent with the requirements of a low tensor-to-scalar ratio, and this requires exquisite fine-tuning of inputs into the inflaton, which should also be made consistent with answers to [7]-[9].
We find the resulting inflaton measurement, which is the conclusion of our document, under the stated assumption, and it may involve [10] directly. That is, is our inflaton consistent with just two standard polarizations, or is a third polarization necessary so that the inflaton of Equation (15) forms? If there is no response function of an interferometer to an additional "scalar" polarization we define, we stick to GR, whereas if Equation (15) necessitates an additional polarization, we are looking at a scalar-tensor gravitational theory. Needless to say, we will require careful analysis of Equation (15). This enormous initial value for the inflaton needs to be examined further, as given in references [1] [2] as to further prospects. It should further be linked to Corda's pioneering work with "gravity's breath", i.e. traces of the inflaton as given by [10] [11], which is the justification of Equation (15) above. We can use this to determine what to make of the stochastic background of pre-space-time physics.
Avoiding the Bicep 2 Mistake: What We Can Do with Equation (15)?
Following [9], what we are doing is examining the stochastic regime of space-time where the following holds: Quote Omni-directional gravitational wave background radiation can arise from fundamental processes in the early Universe, or from the superposition of a large number of signals with a point-like origin. Examples of the former include parametric amplification of gravitational vacuum fluctuations during the inflationary era, termination of inflation through axion decay or resonant preheating, Pre-Big Bang models inspired by string theory, and phase transitions in the early Universe; the observation of a primordial background will give access to energy scales of 10^9 up to 10^10 GeV, well beyond the reach of particle accelerators on Earth.
End of quote Needless to say, though, we need above all to avoid conflating multiple stochastic signals in what we process for primordial gravitational waves, and to use tests to avoid the dust signals which doomed Bicep 2, as is made very clear in [12] [13]. In all, what we are doing is consistent with the requirements given in the author's other article, [14], as in the following quote: Quote The main agenda will be the utilization of Equation (27) to help nail down a range of admissible frequencies, so as to avoid [11]-[14] conflating the frequencies of collected gravitational wave signals from relic cosmological conditions (or would-be signals) with those connected with dust-generated gravitational wave signals, especially from dust conflated with galaxy formation in the early universe. More than anything else, we need to find likely narrow (?) frequency ranges, which will be commensurate with Equation (27), and to use advanced detector technology. Of course such a search will be hard. But it also will be a way, with due diligence, to answer questions raised by the Author in [14]. In doing so, the relative flatness of the early universe and its departure from curved-space conditions will be a great way to answer the suppositions raised in [9] [10] as well.
End of quote Understanding inflaton physics properly will also give credence to considerations given in [1] as to the degree of flatness, or lack thereof, in the early Universe.
83572201 | pes2o/s2orc | v3-fos-license | EVALUATION OF ANTICANCER ACTIVITY OF PLUMBAGO ZEYLANICA LINN. LEAF EXTRACT
Cancer is a malignant disease characterized by rapid and uncontrolled formation of abnormal cells, which may mass together to form a growth or tumour, or proliferate throughout the body. Next to heart disease, cancer is a major killer of mankind. The present study undertakes a preliminary phytochemical screening and anticancer evaluation of Plumbago zeylanica Linn. against Ehrlich Ascites Carcinoma in an animal model. Results indicate that the ethanolic extract of Plumbago zeylanica Linn. possesses significant anticancer activity and also reduces elevated levels of lipid peroxidation, owing to its higher content of terpenoids and flavonoids. Thus the ethanolic extract of Plumbago zeylanica Linn. could have vast therapeutic application against cancer.
INTRODUCTION
The chemotherapy of neoplastic disease has become increasingly important in recent years. An indication of this importance is the establishment of a medical specialty in oncology, in which the physician practices various protocols of adjuvant therapy. Most cancer patients now receive some form of chemotherapy, even though it is merely palliative in many cases. The relatively high toxicity of most anticancer drugs has fostered the development of supplementary drugs that may alleviate these toxic effects or stimulate the regrowth of depleted normal cells 1 . Plants have a long history of use in the treatment of cancer. Plants have played an important role as a source of effective anticancer agents, and it is significant that over 60% of currently used anticancer agents are derived in one way or another from natural sources, including plants, marine organisms and microorganisms.
Plants have been a prime source of highly effective conventional drugs for the treatment of many forms of cancer, and while the actual compounds isolated from the plant frequently may not serve as the drugs themselves, they provide leads for the development of potential novel agents 2 . Therefore it was thought worthwhile to carry out a preliminary phytochemical screening of Plumbago zeylanica Linn. and to screen it for anticancer activity against Ehrlich Ascites Carcinoma in an animal model.
Extraction Procedure
The leaves of Plumbago zeylanica Linn. were dried under shade and then made into a coarse powder with a mechanical grinder. The powder was passed through sieve no. 40 and stored in an airtight container for further use. The dried powder of leaves and stem (150 g) was first extracted with petroleum ether (60-80 °C) in a Soxhlet apparatus; after complete extraction (24 h), the solvent was removed by distillation under reduced pressure and the resulting semisolid mass was vacuum dried using a vacuum evaporator to yield a solid residue (petroleum ether extract). After the extraction with petroleum ether, the same plant material was dried and extracted again with ethanol (95% v/v) in the Soxhlet apparatus; after complete extraction (72 h), the solvent was removed by distillation under reduced pressure and the resulting semisolid mass was vacuum dried using a vacuum evaporator to yield a solid residue (ethanolic extract) 3,4 .
Phytochemical Tests
Various chemical tests were performed for phytochemical identification of the ether and ethanolic extracts of the leaves of Plumbago zeylanica Linn., as per standard procedures 5 .
Anticancer activity
Toxicity Evaluation (LD50) (Karber's method): Thirty mice, including both males and females, weighing 20-25 g were selected for the study. LD50 was measured by Karber's method 7 .
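Karber's arithmetical method, as commonly stated, computes LD50 = LD100 − Σ(a·b)/n, where for each successive pair of dose groups a is the dose interval, b is the mean number of deaths in the pair, and n is the number of animals per group. The sketch below uses illustrative dose/death data, not this study's:

```python
# Karber's method for LD50; the study's own dose groups are not given, so
# the inputs here are purely illustrative.

def karber_ld50(doses, deaths, n_per_group):
    """doses: ascending dose levels; deaths: deaths observed at each dose."""
    assert len(doses) == len(deaths) >= 2
    correction = 0.0
    for i in range(1, len(doses)):
        a = doses[i] - doses[i - 1]            # dose interval
        b = (deaths[i] + deaths[i - 1]) / 2.0  # mean deaths in the pair
        correction += a * b
    ld100 = doses[-1]  # assumes the highest dose kills all animals
    return ld100 - correction / n_per_group

ld50 = karber_ld50([100, 200, 400, 800, 1600], [0, 1, 3, 5, 6], n_per_group=6)
```

With these illustrative numbers the correction sum is 6450, giving LD50 = 1600 − 6450/6 = 525 (same dose units as the input).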
Animals
Male Swiss albino mice weighing between 18-25 g were used for the present study. They were maintained under standard environmental conditions and were fed a standard pellet diet with water ad libitum. The mice were acclimatized to laboratory conditions for 10 days before commencement of the experiment. All procedures described were reviewed and approved by the Institutional Animal Ethical Committee of J.K.K. Nataraja College of Pharmacy, Komarapalayam.
Cancer Cell line
EAC cells were obtained from Amala Cancer Research Center, Thrissur, Kerala, India. They were maintained by weekly intraperitoneal inoculation of 10^6 cells/mouse.
Preparation of extract drug and mode of administration
In the present anticancer study, the ethanolic extract of Plumbago zeylanica (EEPZ) at doses of 100 mg/kg and 200 mg/kg was prepared as a suspension by dissolving the ethanolic extract in propylene glycol and sterile physiological saline containing Tween 20 to obtain the desired concentration 8,9 .
Tumor Transplantation
Ehrlich's Ascites Carcinoma was maintained by serial transplantation from tumor-bearing Swiss albino mice. Ascitic fluid was drawn from tumor-bearing mice at the log phase (day 7-8 of tumor bearing) of the tumor cells. The tumor cell number was adjusted to 2×10^6 10 .
Tumor Cell Volume and Packed Cell Volume
The mice were dissected to collect ascitic fluid from the peritoneal cavity, which was centrifuged at 1000 rpm for 5 min to determine packed cell volume 11 . The transplantable murine tumor was carefully collected to measure the tumor volume.
Viable and non viable cell count
Viable and non-viable counting of the ascitic cells was done by staining with trypan blue (0.4% in normal saline) in a dye exclusion test, and the count was determined in a Neubauer counting chamber. The cells that did not take up the dye were viable, and those that took up the stain were non-viable 10 .
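The Neubauer-chamber arithmetic behind this count is standard: concentration (cells/mL) = mean count per large square × dilution factor × 10^4, and viability (%) = unstained / (unstained + stained). The counts and dilution below are illustrative, not the study's data:

```python
# Standard haemocytometer arithmetic for the trypan-blue exclusion test.
# Counts per large square and the dilution factor are illustrative.

def neubauer(viable_counts, nonviable_counts, dilution=2):
    """Counts per large square -> (viable/mL, nonviable/mL, viability %)."""
    squares = len(viable_counts)
    viable = sum(viable_counts) / squares * dilution * 1e4
    nonviable = sum(nonviable_counts) / squares * dilution * 1e4
    viability = 100.0 * viable / (viable + nonviable)
    return viable, nonviable, viability

v, nv, pct = neubauer([50, 55, 45, 50], [5, 5, 5, 5])
```

Here the mean viable count of 50 per square at a 1:2 dilution gives 1.0×10^6 viable cells/mL at about 91% viability.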
Mean survival time and percent increase in life span
The effect of EEPZ on tumor growth was observed via mean survival time (MST) and percent increase in life span (%ILS). The MST of each group, containing 4 mice, was monitored by recording mortality daily for 6 weeks, and %ILS was calculated using the following equation 10 .
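The equation itself is not reproduced above; the formula conventionally used in EAC survival studies, which we assume is the one intended, is %ILS = (MST_treated / MST_control − 1) × 100, with MST the mean survival time in days. The survival data below are illustrative:

```python
# Conventional MST / %ILS computation; assumed to match the cited equation.
# Survival times (days) are illustrative, not the study's data.

def mst(days):
    """Mean survival time of one group."""
    return sum(days) / len(days)

def percent_ils(treated_days, control_days):
    """Percent increase in life span of treated over control."""
    return (mst(treated_days) / mst(control_days) - 1.0) * 100.0

ils = percent_ils(treated_days=[28, 30, 26, 32], control_days=[18, 20, 19, 23])
```

With these illustrative groups (MST 29 vs 20 days) the increase in life span is 45%.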
Effect of EEPZ on hematological parameters
Blood was collected from each mouse by intracardiac puncture with an anticoagulant (heparin), and white blood cells (WBC), red blood cells (RBC), hemoglobin and differential counts were determined in groups comprising I) tumor-bearing mice 12 .
Biochemical Assay
After collection of the blood samples, the mice were sacrificed and their livers excised. The isolated liver was rinsed in ice-cold normal saline followed by cold phosphate buffer (pH 7.4), blotted dry and weighed. A 10% w/v homogenate of liver was prepared in ice-cold phosphate buffer (pH 7.4); a portion was utilized for estimation of lipid peroxidation, and another portion, after precipitation of proteins with TCA, was used for estimation of glutathione. The remaining homogenate was centrifuged at 1500 rpm at 4 °C for 15 min. The supernatant thus obtained was used for the estimation of superoxide dismutase, catalase and protein content 13 .
Statistical Analysis
The experimental results were expressed as mean ± SEM. Data were assessed by Student's t-test; P < 0.05 was considered statistically significant.
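The summary statistics reported throughout (mean ± SEM and an unpaired t comparison) can be sketched in a few lines of pure Python; the Welch form of the two-sample t statistic is used here as a reasonable reading of "Student t-test", and the data are illustrative:

```python
# Mean, standard error of the mean (sample SD / sqrt(n)), and an unpaired
# two-sample t statistic (Welch form). Data are illustrative, not the study's.
import math

def mean_sem(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    return m, math.sqrt(var / n)

def welch_t(xs, ys):
    mx, sem_x = mean_sem(xs)
    my, sem_y = mean_sem(ys)
    return (mx - my) / math.sqrt(sem_x ** 2 + sem_y ** 2)

m, sem = mean_sem([9.8, 9.9, 9.7, 9.8])  # e.g. haemoglobin values, n = 4
```

The t statistic would still need to be referred to a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain the P value.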
Toxicity Evaluation (LD 50 )
In the acute toxicity study, the given extract of Plumbago zeylanica did not show any mortality up to a dose of 2000 mg/kg. The extract showed sedation, hypnosis, and mild muscle relaxant properties. With EEPZ at doses of 100 and 200 mg/kg, the haemoglobin content in EAC-bearing mice increased to 10.6±0.057 and 11.45±0.057, respectively. The haemoglobin content in the EAC control mice (9.8±0.02) was significantly decreased compared to normal mice (12.85±0.25) (Table 7). The total WBC count was significantly higher in the EAC-treated mice when compared with normal mice, whereas EEPZ-treated mice showed a significantly reduced WBC count compared to control mice. Significant changes were observed in the differential count when extract-treated mice were compared with EAC control mice (Table 4).
Biochemical assay
The biochemical assay indicated that EEPZ significantly reduced the elevated levels of lipid peroxidation, and thereby it may act as an antitumor agent. The levels of lipid peroxidation, catalase and protein content are summarized in Table 8, with a graphical representation shown (Table 5).
DISCUSSION
The leaves and stem of Plumbago zeylanica Linn. were found to contain a higher amount of triterpenoids. In the present study, the anticancer potential of the plant was estimated in EAC-bearing carcinoma cells. The ethanolic extract of Plumbago zeylanica Linn. considerably reduced tumour volume and increased the life span of the test animals. It was also found that EEPZ significantly reduced the elevated levels of lipid peroxidation, and thereby it may act as an antitumour agent.
CONCLUSION
The ethanolic extract of Plumbago zeylanica Linn. possessed significant anticancer and antioxidant activity due to its higher terpenoid and flavonoid content. Further investigation of the different biological activities of this plant, with different modes, will not only validate the types of activities claimed by Ayurvedic, Siddha and traditional practitioners, but will also bring innovation to the field of therapeutics. (Table legend: Values are mean ± SEM (n = 4); EAC control group compared with normal group; experimental groups compared with EAC control. P < 0.01, *P < 0.05.)
A noncommutative De Finetti theorem for boolean independence
We introduce a family of quantum semigroups and their natural coactions on noncommutative polynomials. We present three invariance conditions, associated with these coactions, for the joint distribution of sequences of selfadjoint noncommutative random variables. For one of the invariance conditions, we prove that the joint distribution of an infinite sequence of noncommutative random variables satisfying it is equivalent to the sequence being identically distributed and boolean independent with respect to the conditional expectation onto its tail algebra. This is a boolean analogue of de Finetti's theorem on exchangeable sequences. At the end of the paper, we discuss the other two invariance conditions, which lead to some trivial results.
Introduction
In classical probability theory, de Finetti's theorem states that an infinite sequence of random variables whose joint distribution is invariant under all finite permutations is conditionally independent and identically distributed. One can see e.g. [9] for an exposition of the classical de Finetti theorem. Recently, in [12], Köstler and Speicher discovered that the de Finetti theorem has a natural free analogue if we strengthen "exchangeability" to invariance under a coaction of the free quantum permutations. Here quantum permutations refer to Wang's quantum groups A_s(n) in [24]. Their work began a systematic study of probabilistic symmetries in noncommutative probability theory. Much of the further project was carried out by Banica, Curran and Speicher; see [1], [5], [6]. They studied de Finetti-type theorems in both classical (commutative) probability theory and noncommutative probability theory under the invariance conditions of easy groups and easy quantum groups, respectively.
In the noncommutative realm, besides freeness and classical independence, there are many other interesting independence relations, e.g. monotone independence [13], boolean independence [18], type B independence [3] and, more recently, two-faced freeness for pairs of random variables [21]. All these independence relations are associated with certain products on probability spaces. Among these products, in [16], Speicher showed that there are only two universal products on unital noncommutative probability spaces, namely the tensor product and the free product. The corresponding independence relations associated with these two universal products are classical independence and free independence. It was also shown in [16] that there is a unique universal product in the non-unital framework, which is called the boolean product. This non-unital universal product provides another universal independence, called boolean independence. It would be interesting to find probabilistic symmetries which can characterize boolean independence. The main purpose of this work is to give certain probabilistic symmetries which characterize conditional boolean independence in the form of a de Finetti theorem.
To carry out this work, we will construct a class of quantum semigroups B_s(n) and their sub quantum semigroups B_s(n). We can then define a coaction of B_s(n) on the set of noncommutative polynomials in n indeterminants. Unlike for B_s(n), there are two natural ways to define coactions of B_s(n) on the set of noncommutative polynomials. The first considers the set of noncommutative polynomials as a linear space; the coaction of B_s(n) defined as a coaction on the linear space will be called the linear coaction of B_s(n) on the set of noncommutative polynomials. The second way defines the coaction of B_s(n) by considering the set of noncommutative polynomials as an algebra; the coaction of B_s(n) defined as a coaction on the algebra will be called the algebraic coaction of B_s(n) on the set of noncommutative polynomials. With these three coactions of the quantum semigroups on the set of noncommutative polynomials in n indeterminants, we can describe three invariance conditions for the joint distribution of any sequence of n random variables (x_1, ..., x_n). We will show that the invariance conditions determined by the algebraic coaction of B_s(n) and the coaction of B_s(n) are so strong that if the joint distribution of a sequence of n random variables (x_1, ..., x_n) satisfies one of them, then x_1 = x_2 = ··· = x_n or x_1 = x_2 = ··· = x_n = 0, respectively. In this paper, we are mainly concerned with the invariance conditions determined by the linear coactions of the quantum semigroups B_s(n). Before proving the main theorems, we will study tail algebras in W*-probability spaces with non-degenerate normal states. These probability spaces are more general than W*-probability spaces with faithful normal states. There will be a brief discussion of why we should consider these more general spaces.
Unlike for W*-probability spaces with faithful normal states, we will define two kinds of tail algebras, one of which contains the unit of the original algebra while the other (T) does not. The definitions and properties of the two tail algebras will be given in Section 7. As Köstler does in [11], we will define our conditional expectation by taking the WOT limit of "shifts". One of the differences between our work and Köstler's result is that our tail algebra may not contain the unit of the original algebra. We will then prove the following theorem for the two different cases:

Theorem 1.1. Let (A, φ) be a W*-probability space and (x_i)_{i∈N} an infinite sequence of selfadjoint random variables which generate A as a von Neumann algebra, such that the unit of A is (not) contained in the WOT closure of the non-unital algebra generated by (x_i)_{i∈N}. Then the following are equivalent:
a) The joint distribution of (x_i)_{i∈N} satisfies the invariance condition associated with the linear coactions of the quantum semigroups B_s(n).
b) The sequence (x_i)_{i∈N} is identically distributed and boolean independent with respect to the φ-preserving conditional expectation E onto the non-unital (unital) tail algebra of the (x_i)_{i∈N}.

One can see the definitions of A_s(n) and B_s(n) in Sections 2 and 3 for details. It should be mentioned here that Wang's quantum permutation group A_s(n) is a quotient algebra of B_s(n) for each n. Moreover, both the invariance conditions associated with the linear coactions and those associated with the algebraic coactions of the quantum semigroups B_s(n) are stronger than the invariance condition associated with the quantum permutations A_s(n).
In Section 3, we introduce our quantum semigroups B_s(n) and their sub quantum semigroups B_s(n), together with the linear coactions of the quantum semigroups B_s(n) on the set of noncommutative polynomials, and we define the invariance condition associated with the linear coaction of B_s(n). In Section 4, we have a brief discussion of the relation between freeness and boolean independence; we show that operator-valued boolean independence implies operator-valued freeness in some special cases. In Section 5, we prove that the joint distribution of an infinite sequence of boolean independent operator-valued random variables is invariant under the linear coactions of the B_s(n). In Section 6, we recall the properties of the tail algebra of any infinite exchangeable sequence of noncommutative random variables and study the properties of the tail algebra under the boolean independence condition. In Section 7, we prove the main theorems and provide some examples. In Section 8, we define the coaction of B_s(n) and the algebraic coaction of B_s(n) on the set of noncommutative polynomials in n indeterminants, and then define the invariance conditions associated with these coactions. We study the sets of random variables (x_1, ..., x_n) whose joint distributions satisfy one of these invariance conditions.
Preliminaries and Notation
2.1. Noncommutative probability space. We recall some necessary definitions and notation for noncommutative probability spaces. For further details, see the texts [12], [14], [2], [20]. Definition 2.1. A noncommutative probability space (A, φ) consists of a unital algebra A and a linear functional φ: A → C with φ(1_A) = 1. The elements of A are called random variables. Let x ∈ A be a random variable; then its distribution is a linear functional µ_x on C[X] (the algebra of complex polynomials in one variable), defined by µ_x(P) = φ(P(x)).
Note that we do not require the state on a W*-probability space to be tracial. We will specify the probability spaces we are concerned with in Sections 6 and 8.
Definition 2.2. The algebra of noncommutative polynomials in |I| variables, C⟨X_i | i ∈ I⟩, is the linear span of 1 and the noncommutative monomials of the form X_{i_1}^{k_1} X_{i_2}^{k_2} ··· X_{i_n}^{k_n} with i_1 ≠ i_2 ≠ ··· ≠ i_n ∈ I and all k_j positive integers. For convenience, we will use C⟨X_i | i ∈ I⟩_0 to denote the set of noncommutative polynomials without constant term.
Let (x_i)_{i∈I} be a family of random variables in a noncommutative probability space (A, φ). Their joint distribution is the linear functional µ : C⟨X_i | i ∈ I⟩ → C defined by µ(X_{i_1} X_{i_2} ··· X_{i_k}) = φ(x_{i_1} x_{i_2} ··· x_{i_k}). Remark 2.3. In general, the joint distribution depends on the order of the random variables, e.g. µ_{x,y} may not equal µ_{y,x}. According to our notation, µ_{x,y}(X_1 X_2) = φ(xy), but µ_{y,x}(X_1 X_2) = φ(yx).
Definition 2.4. Let (A, φ) be a noncommutative probability space. A family of unital subalgebras (A_i)_{i∈I} is said to be free if φ(a_1 ··· a_n) = 0 whenever a_k ∈ A_{i_k}, i_1 ≠ i_2 ≠ ··· ≠ i_n and φ(a_k) = 0 for all k. Let (x_i)_{i∈I} be a family of random variables and let A_i be the unital subalgebra generated by x_i for each i. We say the family of random variables (x_i)_{i∈I} is free if the family of unital subalgebras (A_i)_{i∈I} is free.
Definition 2.5. Let (A, φ) be a noncommutative probability space. A family of (usually non-unital) subalgebras {A_i | i ∈ I} of A is said to be boolean independent if φ(a_1 a_2 ··· a_n) = φ(a_1)φ(a_2)···φ(a_n) whenever a_k ∈ A_{i_k} and i_1 ≠ i_2 ≠ ··· ≠ i_n. A set of random variables {x_i ∈ A | i ∈ I} is said to be boolean independent if the family of non-unital subalgebras A_i, generated by the x_i respectively, is boolean independent.
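To make the contrast between the two notions of independence concrete, the following standard computation (an illustration, not taken from the source) evaluates a few mixed moments of two self-adjoint variables x, y under each notion:

```latex
% Boolean independence of x and y: moments over alternating words
% factorize completely,
\varphi(xyxy) \;=\; \varphi(x)\,\varphi(y)\,\varphi(x)\,\varphi(y)
          \;=\; \varphi(x)^2\varphi(y)^2 .
% Freeness of x and y: if \varphi(x) = \varphi(y) = 0, the alternating
% word of centered elements vanishes by the defining condition,
\varphi(xyxy) \;=\; 0 .
% For both notions the second mixed moment factorizes the same way:
\varphi(x^2y^2) \;=\; \varphi(x^2)\,\varphi(y^2).
```

The two notions already differ on φ(x y² x) for centered x: freeness gives φ(x²)φ(y²), while boolean independence gives φ(x)φ(y²)φ(x) = 0.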
One refers to [8] for more details on the boolean product of random variables. Since the natural framework for boolean independence is in general a non-unital algebra, we define our operator valued probability spaces as follows. Definition 2.6. An operator valued probability space (A, B, E : A → B) consists of an algebra A, a subalgebra B ⊂ A and a B-B-bimodule map E : A → B, i.e. E[b_1 a b_2] = b_1 E[a] b_2, for all b_1, b_2 ∈ B and a ∈ A. According to the definition in [17], we call E a conditional expectation from A to B if, in addition, E is onto, i.e. E[A] = B. The elements of A are called random variables.
It should be pointed out here that A and B are unital and share the same unit in operator valued free probability theory. Definition 2.7. For an algebra B, we denote by B⟨X⟩ the algebra which is freely generated by B and the indeterminate X. Let 1_X be the identity of C⟨X⟩; then B⟨X⟩ is the set of linear combinations of elements of B and noncommutative monomials b_0 X b_1 X b_2 ··· b_{n−1} X b_n, where b_k ∈ B ∪ {C1_X} and n ≥ 1. The elements of B⟨X⟩ are called B-polynomials. In addition, B⟨X⟩_0 denotes the subalgebra of B⟨X⟩ without constant term, i.e. the linear span of the noncommutative monomials b_0 X b_1 X b_2 ··· b_{n−1} X b_n with b_k ∈ B ∪ {C1_X} and n ≥ 1.
Given an operator valued probability space (A, B, E : A → B) such that A and B are unital, a family of unital subalgebras {A_i ⊃ B}_{i∈I} is said to be free independent with respect to E if E[a_1 ··· a_n] = 0 whenever i_1 ≠ i_2 ≠ ··· ≠ i_n, a_k ∈ A_{i_k} and E[a_k] = 0 for all k. A family (x_i)_{i∈I} is said to be free independent over B if the unital subalgebras {A_i}_{i∈I} generated by x_i and B, respectively, are free. Let {x_i}_{i∈I} be a family of random variables in an operator valued probability space (A, B, E : A → B), where A and B are not necessarily unital. {x_i}_{i∈I} is said to be boolean independent over B if E[p_1(x_{i_1}) p_2(x_{i_2}) ··· p_n(x_{i_n})] = E[p_1(x_{i_1})] E[p_2(x_{i_2})] ··· E[p_n(x_{i_n})] for all i_1, ..., i_n ∈ I with i_1 ≠ i_2 ≠ ··· ≠ i_n and all B-valued polynomials p_1, ..., p_n ∈ B⟨X⟩_0. 2.2. Wang's quantum permutation groups. In [24], Wang introduced the following quantum groups.
Definition 2.8. A_s(n) is defined as the universal unital C*-algebra generated by elements u_{ij} (i, j = 1, ..., n) such that we have • each u_{ij} is an orthogonal projection, i.e. u_{ij}* = u_{ij} = u_{ij}² for all i, j = 1, ..., n; • the elements in each row and column of u = (u_{ij})_{i,j=1,...,n} form a partition of unity, i.e. they are orthogonal and sum up to 1: for each i = 1, ..., n and k ≠ l we have u_{ik} u_{il} = 0 and u_{ki} u_{li} = 0, and for each i = 1, ..., n we have Σ_{k=1}^n u_{ik} = 1 = Σ_{k=1}^n u_{ki}. A_s(n) is a compact quantum group in the sense of Woronowicz [23], with comultiplication, counit and antipode given on the generators by ∆(u_{ij}) = Σ_{k=1}^n u_{ik} ⊗ u_{kj}, ε(u_{ij}) = δ_{ij}, S(u_{ij}) = u_{ji}. The right coaction of A_s(n) on C⟨X_1, ..., X_n⟩ is the linear map α : C⟨X_1, ..., X_n⟩ → C⟨X_1, ..., X_n⟩ ⊗ A_s(n) given by α(X_{i_1} ··· X_{i_k}) = Σ_{j_1,...,j_k=1}^n X_{j_1} ··· X_{j_k} ⊗ u_{j_1 i_1} ··· u_{j_k i_k}, where ⊗ denotes the algebraic tensor product.
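Classically, the abelianization of A_s(n) is C(S_n), and the coordinate function u_{ij} evaluated at a permutation σ is δ_{i,σ(j)}; at a fixed σ, the matrix (u_{ij}) is just a permutation matrix. The sketch below (an illustration, not from the source) checks the "magic unitary" relations numerically for such a matrix:

```python
import numpy as np

# A permutation sigma on {0, 1, 2}: sigma(0)=1, sigma(1)=2, sigma(2)=0.
sigma = [1, 2, 0]
n = len(sigma)

# u[i][j] = 1 if i == sigma(j): the coordinate functions u_ij of A_s(n)
# evaluated at the point sigma of S_n.
u = np.array([[1.0 if i == sigma[j] else 0.0 for j in range(n)]
              for i in range(n)])

# Each entry is an orthogonal projection (here a scalar idempotent).
assert np.allclose(u * u, u)              # u_ij^2 = u_ij = u_ij*

# Entries in one row or column are mutually orthogonal ...
for i in range(n):
    for k in range(n):
        for l in range(n):
            if k != l:
                assert u[i, k] * u[i, l] == 0.0   # row orthogonality
                assert u[k, i] * u[l, i] == 0.0   # column orthogonality

# ... and sum to 1 along every row and every column.
assert np.allclose(u.sum(axis=0), 1.0)
assert np.allclose(u.sum(axis=1), 1.0)
print("magic unitary relations hold for", sigma)
```

In the genuinely quantum case the u_{ij} are non-commuting projections on a Hilbert space, but the same relations are being imposed.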
In the earlier papers, α is defined as an algebra homomorphism. We emphasize the linearity here because we will define the coactions of our quantum semigroups on the noncommutative polynomials in a similar way. The right coaction has the following property: let (x_i)_{i∈N} be an infinite sequence of random variables in a noncommutative probability space (A, φ); the sequence is said to be quantum exchangeable if its joint distribution is invariant under Wang's quantum permutation groups, i.e. for all n we have µ_{x_1,...,x_n}(p) 1_{A_s(n)} = (µ_{x_1,...,x_n} ⊗ id_{A_s(n)})(α(p)), where µ_{x_1,...,x_n} is the joint distribution of x_1, ..., x_n with respect to φ and p ∈ C⟨X_1, ..., X_n⟩. Let S_n be the permutation group on {1, ..., n}. The joint distribution of (x_i)_{i∈N} is said to be exchangeable if for all n and σ ∈ S_n we have µ_{x_1,...,x_n} = µ_{x_{σ(1)},...,x_{σ(n)}}, where µ_{x_1,...,x_n} is the joint distribution of x_1, ..., x_n with respect to φ. It was shown in [12] that infinite sequences of random variables are exchangeable if they are invariant under Wang's quantum permutation groups.
Quantum semigroups
Our probabilistic symmetries will be given by the invariance conditions associated with certain coactions of our quantum semigroups. We first recall the related definitions and notation for quantum semigroups. A quantum space is an object of the category dual to the category of C*-algebras ([22]). For any C*-algebras A and B, the set of morphisms Mor(A, B) consists of all C*-algebra homomorphisms φ acting from A to M(B), where M(B) is the multiplier algebra of B, such that φ(A)B is dense in B. If A and B are unital C*-algebras, then all unital C*-homomorphisms from A to B are in Mor(A, B). Following [15]: Definition 3.1. By a quantum semigroup we mean a C*-algebra A endowed with an additional structure described by a morphism ∆ ∈ Mor(A, A ⊗ A) such that (∆ ⊗ id_A)∆ = (id_A ⊗ ∆)∆. In other words, ∆ defines a comultiplication on A. Here the tensor product ⊗ denotes the minimal tensor product ⊗_min. Now, we turn to introduce our quantum semigroups. Quantum semigroups (B_s(n), ∆): the algebra B_s(n) is defined as the universal unital C*-algebra generated by elements u_{i,j} (i, j = 1, ..., n) and a projection P such that we have • each u_{i,j} is an orthogonal projection, i.e. u_{i,j}* = u_{i,j} = u_{i,j}² for all i, j = 1, ..., n; • u_{i,k} u_{i,l} = 0 and u_{k,i} u_{l,i} = 0 for all i and all k ≠ l. We will denote the identity by I; the projection P is called the invariant projection of B_s(n). On this unital C*-algebra, we can define a unital C*-homomorphism ∆ : B_s(n) → B_s(n) ⊗ B_s(n) by the following formulas on the generators: ∆(u_{i,j}) = Σ_{k=1}^n u_{i,k} ⊗ u_{k,j}, and ∆P = P ⊗ P, ∆I = I ⊗ I. We will see that (B_s(n), ∆) is a quantum semigroup. To show this we need to check that ∆ defines a unital C*-homomorphism from B_s(n) to B_s(n) ⊗ B_s(n) and satisfies the comultiplication condition (∆ ⊗ id)∆ = (id ⊗ ∆)∆. Each ∆(u_{i,j}) is an orthogonal projection, and for k ≠ l we have ∆(u_{i,k})∆(u_{i,l}) = Σ_{s,t} u_{i,s} u_{i,t} ⊗ u_{s,k} u_{t,l} = Σ_s u_{i,s} ⊗ u_{s,k} u_{s,l} = 0. We can prove ∆(u_{l,i})∆(u_{m,i}) = 0, for m ≠ l, in the same way. Moreover, ∆P = P ⊗ P is a projection, and ∆ sends the unit of B_s(n) to the unit of B_s(n) ⊗ B_s(n). Therefore, ∆ defines a unital C*-homomorphism on B_s(n) by the universality of B_s(n).
The comultiplication condition holds, because on the generators we have (∆ ⊗ id)∆(u_{i,j}) = Σ_{k,l=1}^n u_{i,l} ⊗ u_{l,k} ⊗ u_{k,j} = (id ⊗ ∆)∆(u_{i,j}), and likewise on P and I. Remark 3.2. If we let the invariant projection be the identity, then we get Wang's free quantum permutation group. Therefore, we have A_s(n) ⊂ B_s(n). Now, we provide some examples of representations of B_s(n). Let C^6 be the standard 6-dimensional complex Hilbert space with orthonormal basis v_1, ..., v_6. Let P_{1,1} = P_{v_1+v_2}, P_{1,2} = P_{v_3+v_4}, P_{1,3} = P_{v_5+v_6}, P_{2,1} = P_{v_3+v_6}, P_{2,2} = P_{v_5+v_2}, P_{2,3} = P_{v_1+v_4}, P_{3,1} = P_{v_4+v_5}, P_{3,2} = P_{v_1+v_6}, P_{3,3} = P_{v_2+v_3} and P = P_{v_1+v_2+v_3+v_4+v_5+v_6}, where P_v denotes the one dimensional projection onto the subspace spanned by v. Then the unital algebra generated by the P_{i,j} and P gives a representation π of B_s(3) on C^6 via the following formulas on the generators of B_s(3): π(u_{i,j}) = P_{i,j} and π(P) = P; π is well defined by the universality of B_s(3). Moreover, the matrix forms of P_{1,1} and P with respect to this basis can be written down explicitly. In general, we have the following lemma. Let v_1, ..., v_{2n} be an orthonormal basis of the standard 2n-dimensional Hilbert space C^{2n}, and set v_k = v_{k+2n} for all k ∈ Z. Let the P_{i,j} be rank-one projections of the form P_v, chosen as in the example above, where P_v is the orthogonal projection onto the one dimensional subspace generated by the vector v, let P = P_{v_1+v_2+···+v_{2n}}, and let 1 be the identity of B(C^{2n}). Then {P_{i,j}}_{i,j=1,...,n} and P satisfy the defining conditions of the algebra B_s(n). Proof. It is easy to see that the inner products of the relevant vectors vanish, which gives the required orthogonality relations. Now, we turn to introduce a sub quantum semigroup of (B_s(n), ∆). Since P is a projection in B_s(n), PB_s(n)P is a C*-algebra with identity P and generators P u_{i_1,j_1} ··· u_{i_k,j_k} P, for all positive integers k and i_1, j_1, ..., i_k, j_k = 1, ..., n. If we restrict the comultiplication ∆ onto PB_s(n)P, then ∆(PB_s(n)P) ⊂ PB_s(n)P ⊗ PB_s(n)P, so the restriction makes PB_s(n)P a quantum semigroup with identity P. We will call PB_s(n)P the boolean permutation quantum semigroup of n.
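The concrete projections on C^6 above can be checked numerically. The sketch below is an independent verification; the compatibility relation (Σ_k u_{i,k})P = P = (Σ_k u_{k,i})P is taken here as an assumed reading of the role of the invariant projection (it is the relation that makes ∆P = P ⊗ P consistent), alongside the row/column orthogonality:

```python
import numpy as np

def proj(*idx):
    """Rank-one projection onto the span of the sum of basis vectors v_i, i in idx."""
    v = np.zeros(6)
    for i in idx:
        v[i - 1] = 1.0
    v /= np.linalg.norm(v)
    return np.outer(v, v)

# The matrix (P_{i,j}) of the representation of B_s(3) on C^6, row by row.
P = [[proj(1, 2), proj(3, 4), proj(5, 6)],
     [proj(3, 6), proj(5, 2), proj(1, 4)],
     [proj(4, 5), proj(1, 6), proj(2, 3)]]
Pinv = proj(1, 2, 3, 4, 5, 6)          # the invariant projection P

for i in range(3):
    for k in range(3):
        # each P_{i,k} is an orthogonal projection
        assert np.allclose(P[i][k] @ P[i][k], P[i][k])
        for l in range(3):
            if k != l:
                # orthogonality within rows and within columns
                assert np.allclose(P[i][k] @ P[i][l], 0)
                assert np.allclose(P[k][i] @ P[l][i], 0)

for i in range(3):
    row = sum(P[i][k] for k in range(3))
    col = sum(P[k][i] for k in range(3))
    # rows and columns need not sum to the identity (unlike in A_s(3)) ...
    assert not np.allclose(row, np.eye(6))
    # ... but they act as the identity on the range of the invariant projection
    assert np.allclose(row @ Pinv, Pinv)
    assert np.allclose(col @ Pinv, Pinv)
print("B_s(3) relations verified on C^6")
```

The failure of the row sums to equal the identity is exactly what separates this representation of B_s(3) from a representation of Wang's A_s(3).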
Remark 3.4. If we require Pu i,j = u i,j P for all i, j = 1, ..., n, then the universal algebra B s (n) we constructed in the above way is exactly Wang's quantum permutation group. Therefore, A s (n) is a quotient algebra of B s (n).
In the following definition, ⊗ denotes the tensor product of linear spaces. Definition 3.5. Let (A, ∆) be a quantum semigroup and let W be a complex vector space. A linear map T : W → W ⊗ A is called a (right) linear coaction of A on W if (T ⊗ id_A)T = (id_W ⊗ ∆)T. We say a linear functional ω : W → C is invariant under such a coaction T if (ω ⊗ id_A)(T(w)) = ω(w) 1_A for all w ∈ W. This definition is about coactions on linear spaces, not coactions on algebras.
Let C⟨X_1, ..., X_n⟩ be the set of noncommutative polynomials in n indeterminates; it is a linear space over C with basis X_{i_1} ··· X_{i_k} for all non-negative k and i_1, ..., i_k = 1, ..., n, where X_{i_1} ··· X_{i_k} is the constant term 1 when k = 0. Now, we define a right coaction L_n of B_s(n) on C⟨X_1, ..., X_n⟩ as follows: L_n(X_{i_1} ··· X_{i_k}) = Σ_{j_1,...,j_k=1}^n X_{j_1} ··· X_{j_k} ⊗ P u_{j_1,i_1} u_{j_2,i_2} ··· u_{j_k,i_k} P, and L_n(1) = 1 ⊗ P. It is a well defined coaction of B_s(n) on C⟨X_1, ..., X_n⟩, because the identity (L_n ⊗ id_{B_s(n)})L_n = (id ⊗ ∆)L_n can be checked directly on this basis. We will call L_n the linear coaction of B_s(n) on C⟨X_1, ..., X_n⟩. The algebraic coaction will be defined in Section 8.
Lemma 3.7. Let L_n be the linear coaction of B_s(n) on C⟨X_1, ..., X_n⟩, and let {u_{i,j}}_{i,j=1,...,n} and P be the standard generators of B_s(n). Then, for all p_1, ..., p_k ∈ C⟨X⟩_0 and all i_1, ..., i_k ∈ {1, ..., n}, L_n(p_1(X_{i_1}) ··· p_k(X_{i_k})) = Σ_{j_1,...,j_k=1}^n p_1(X_{j_1}) ··· p_k(X_{j_k}) ⊗ P u_{j_1,i_1} u_{j_2,i_2} ··· u_{j_k,i_k} P. Proof. Since the map is linear, it suffices to show that the equation holds by assuming p_l(X) = X^{t_l}, where t_l ≥ 1 for all l = 1, ..., k. Expanding L_n(X_{i_1}^{t_1} ··· X_{i_k}^{t_k}) and noticing that u_{j_{m,s},i_m} u_{j_{m,s+1},i_m} = δ_{j_{m,s},j_{m,s+1}} u_{j_{m,s},i_m}, the right hand side collapses to the claimed sum. The proof is now completed. The following is the invariance condition we will use to characterize conditional boolean independence.
Definition 3.8. Let (A, φ) be a noncommutative probability space and (x_i)_{i∈N} an infinite sequence of random variables in A. We say the joint distribution satisfies the invariance conditions associated with the linear coactions of the boolean quantum permutation semigroups B_s(n) if for all n and all p ∈ C⟨X_1, ..., X_n⟩ we have µ_{x_1,...,x_n}(p) P = (µ_{x_1,...,x_n} ⊗ id_{B_s(n)})(L_n(p)). It is easy to see that invariance under the boolean quantum permutation semigroups implies invariance under Wang's quantum permutation groups, by letting P equal the identity of B_s(n).
This paper is mainly concerned with noncommutative polynomials with complex coefficients, even though noncommutative polynomials with amalgamation over an algebra are the natural tool for analyzing operator valued probability theory. Since this invariance condition is based on a coaction of the quantum semigroups B_s(n) on linear spaces rather than algebras, its generalization to noncommutative polynomials with operator valued coefficients should be treated in a different way, and we will present it in another work.
Boolean independence and freeness
In this section, we will show that operator valued boolean independent variables are sometimes operator valued free independent. In particular, in Section 7, operator valued boolean independent variables are always operator valued free independent when we construct our conditional expectation in the unital-tail-algebra case. The properties are related to C*-algebra unitalization, of which we provide a brief review here. To every C*-algebra A one can associate a unital C*-algebra Ā which contains A as a two-sided ideal and with the property that the quotient C*-algebra Ā/A is isomorphic to C. Actually, Ā = {xĪ + a | x ∈ C, a ∈ A}, where Ī is the unit of Ā. We will denote xĪ + a by (x, a), where x ∈ C and a ∈ A; then (x, a)(y, b) = (xy, xb + ya + ab). Given a conditional expectation E : A → B, define Ē : Ā → B̄ by Ē[(x, a)] = (x, E[a]). It is obvious that Ē is a projection, i.e. Ē² = Ē. Hence, Ē is a B̄-B̄-bimodule map from a unital algebra onto a unital subalgebra, i.e. a conditional expectation. Let (x_k, a_k) ∈ Ā_{i_k}, i.e. a_k ∈ A_{i_k} and the x_k are complex numbers, for k = 1, ..., n, with Ē[(x_k, a_k)] = 0 and i_1 ≠ i_2 ≠ ··· ≠ i_n. Then we have x_k = 0 for all k = 1, ..., n, and if the A_i are boolean independent with respect to E, Ē[(x_1, a_1)(x_2, a_2) ··· (x_n, a_n)] = Ē[(0, a_1)(0, a_2) ··· (0, a_n)] = Ē[(0, a_1 a_2 ··· a_n)] = (0, E[a_1 a_2 ··· a_n]) = (0, E[a_1] E[a_2] ··· E[a_n]) = (0, 0) = 0, and B̄ ⊂ Ā_i for all i.
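The unitalization arithmetic can be made concrete. In the sketch below (an illustration, not from the source), the non-unital algebra A is taken to be the strictly upper triangular 3×3 matrices; a pair (x, a) stands for xĪ + a, and the abstract multiplication rule (x, a)(y, b) = (xy, xb + ya + ab) is verified against honest matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

def strict_upper():
    """A random element of the non-unital algebra of strictly upper
    triangular 3x3 matrices (a nilpotent algebra with no unit)."""
    return np.triu(rng.standard_normal((3, 3)), k=1)

def mult(p, q):
    """Multiplication in the unitalization: (x,a)(y,b) = (xy, xb + ya + ab)."""
    (x, a), (y, b) = p, q
    return (x * y, x * b + y * a + a @ b)

def embed(p):
    """Identify (x, a) with x*I + a inside the 3x3 matrices."""
    x, a = p
    return x * np.eye(3) + a

p = (2.0, strict_upper())
q = (-1.5, strict_upper())

# The abstract product agrees with matrix multiplication of x*I + a.
assert np.allclose(embed(mult(p, q)), embed(p) @ embed(q))

# The quotient map (x, a) -> x is a homomorphism onto C, so A-bar/A ~ C.
assert mult(p, q)[0] == p[0] * q[0]
print("unitalization product verified")
```

The same pair arithmetic underlies the extension Ē[(x, a)] = (x, E[a]) of a conditional expectation used in the text.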
By checking the definition directly in a similar way, we have the analogous statements. We also have: Proof. Since (0, A) is a two sided ideal of Ā, for all p ∈ B̄⟨X⟩_0 and a ∈ A we have p(0, a) ∈ (0, A). Moreover, there exists p′ ∈ B⟨X⟩_0 such that p(0, a) = (0, p′(a)). Therefore, for all p_1, ..., p_k ∈ B̄⟨X⟩_0 and i_1 ≠ i_2 ≠ ··· ≠ i_k, the boolean factorization in Ā reduces to the one in A. This is our desired conclusion. Proof. Let p ∈ B⟨X⟩; then p(x_i) = p_0(x_i) + b for some p_0 ∈ B⟨X⟩_0 and b ∈ B. By the assumption, we have E[p(x_i)] = E[p_0(x_i)] + b. Let p_1, ..., p_n ∈ B⟨X⟩ with p_k(x_{i_k}) = 0, where i_1 ≠ i_2 ≠ ··· ≠ i_n. Then, for every 1 ≤ k ≤ n, we can find a noncommutative polynomial p_{0k} ∈ B⟨X⟩_0 such that 0 = p_k(x_{i_k}) = p_{0k}(x_{i_k}). Remark 4.5. In C*- and W*-probability spaces, the set {p(x_i) | p ∈ B⟨X⟩_0} will be replaced by its norm closure and WOT (weak operator topology) closure, respectively.
Operator valued boolean random variables are invariant under boolean quantum permutations
Fix n, k and 1 ≤ i_1, ..., i_k ≤ n, and consider, in B_s(n), the sum Σ_{j_1,...,j_k=1}^n P u_{i_1,j_1} ··· u_{i_k,j_k} P. According to the definition of B_s(n), the product u_{i_1,j_1} ··· u_{i_k,j_k} is non-vanishing only if it satisfies that i_t = i_{t+1} iff j_t = j_{t+1}.
Definition 5.1. Given a set S, a collection P of disjoint nonempty sets V_1, ..., V_r is called a partition of S if ⋃_i V_i = S. If S is ordered and the partition P = {V_1, ..., V_r} can be reordered as P = {W_1, ..., W_r} such that a < b for all a ∈ W_s, b ∈ W_t with s < t, then we call P an interval partition of S. The collection of interval partitions of S will be denoted by P_I(S). We will always write interval partitions in order. Let I^n = I × I × ··· × I be the n-fold Cartesian product of the index set I; then we can define an equivalence relation ∼ on I^n: two sequences of indices (i_m)_{m=1,...,n} ∼ (j_m)_{m=1,...,n} if the two sequences are compatible with the same interval partition of [n] into blocks of consecutive equal indices.
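Interval partitions are easy to enumerate: an interval partition of [n] is determined by choosing a set of cut points among the n − 1 gaps between consecutive elements, so there are 2^{n−1} of them. A small sketch (an illustration, not from the source):

```python
from itertools import combinations

def interval_partitions(n):
    """All interval partitions of [n] = {1, ..., n}, written in order:
    each one corresponds to a choice of cut points among the n-1 gaps."""
    elems = list(range(1, n + 1))
    result = []
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = [0, *cuts, n]
            result.append([elems[bounds[i]:bounds[i + 1]]
                           for i in range(len(bounds) - 1)])
    return result

parts = interval_partitions(4)
# [[1], [2, 3], [4]] is an interval partition of [4]; [[1, 3], [2, 4]] is not.
assert [[1], [2, 3], [4]] in parts
assert len(parts) == 2 ** 3            # 2^(n-1) interval partitions of [n]
print(len(parts), "interval partitions of [4]")
```

Two index sequences are equivalent in the sense of Definition 5.1 exactly when their blocks of consecutive equal indices determine the same interval partition of [n].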
Theorem 5.5. Let (A, B, E : A → B) be an operator valued probability space with A unital, and let {x_i}_{i∈N} be an infinite sequence of random variables in A which are identically distributed and boolean independent with respect to E. Every linear functional φ on B extends to a linear functional φ̃ on A by letting φ̃(·) = φ(E[·]). Then the joint distribution of the sequence {x_i}_{i∈N} with respect to φ̃ satisfies the invariance conditions associated with the linear coactions of the boolean permutation quantum semigroups B_s(n).
Properties of Tail Algebra for Boolean Independence
In order to study boolean exchangeable sequences of random variables we need to choose a suitable kind of noncommutative probability space. It was pointed out by Hasebe that a W*-probability space with a faithful normal state does not contain boolean independent random variables with Bernoulli law. It is therefore necessary to consider W*-probability spaces with the more general states of the following definition. Definition 6.1. Let A be a von Neumann algebra. A normal state φ on A is said to be non-degenerated if x = 0 whenever φ(axb) = 0 for all a, b ∈ A.
Remark 6.2. By Proposition 7.1.15 of [10], if φ is a non-degenerated normal state on A, then the GNS representation associated to φ is faithful. Every faithful normal state on A is faithful on all its subalgebras, but a non-degenerated normal state on A is not necessarily non-degenerated on A's subalgebras. We can see that elements that contribute nothing to the system are 0.
Let (A, φ) be a W*-probability space with a non-degenerated normal state φ, and suppose A is generated by an infinite sequence of random variables {x_i}_{i∈N} whose joint distribution is invariant under the linear coactions of the quantum semigroups B_s(k). Let A_0 be the non-unital algebra over C generated by {x_i}_{i∈N}. In this section, we assume that 1_A is contained in the weak closure of A_0. We will denote the GNS construction associated to φ by (H_ξ, ξ, π); then there is a linear map ·̂ : A_0 → H_ξ such that â = π(a)ξ for all a ∈ A_0. In the usual sense, the tail algebra A_tail of {x_i}_{i∈N} is defined as A_tail = ⋂_{n≥1} vN{x_k | k ≥ n}, where vN{x_k | k ≥ n} is the von Neumann algebra generated by {x_k | k ≥ n}. We will call A_tail the unital tail algebra in this paper. In this section, the range algebra we will use is a "non-unital tail algebra" T, given by T = ⋂_{n≥1} W*{x_k | k ≥ n}, where W*{x_k | k ≥ n} is the WOT closure of the non-unital algebra generated by {x_k | k ≥ n}. If the identity is contained in this algebra, then T is also the unital tail algebra of {x_i}_{i∈N}. For convenience, we denote by A_n the unital algebra generated by {x_k | k > n}. Now, we turn to defining our T-linear map; the method comes from [12]. Because we are dealing with von Neumann algebras with non-degenerated normal states, which are more general than faithful states, it is necessary to provide a complete construction here. In [11], the normal conditional expectation Köstler constructed via the shift of the random variables requires the sequence only to be spreadable, but in our situation the existence of the normal linear map relies on the invariance under the quantum semigroups B_s(k). Lemma 6.3. Let A be a von Neumann algebra generated by an infinite sequence of selfadjoint random variables (x_i)_{i∈N}, and let φ be a non-degenerated normal state on A. If the sequence (x_i)_{i∈N} is exchangeable, then there is a norm preserving endomorphism α of A_0 such that α(x_i) = x_{i+1} for all i ∈ N. Proof. Let (H_ξ, ξ, π) be the GNS construction associated to φ. We have that {â | a ∈ A_0} is dense in H_ξ.
For each n ∈ N, denote by A_[n] the non-unital algebra generated by {x_1, ..., x_n}. We can assume y = p(x_1, ..., x_N) for some p ∈ C⟨X_1, ..., X_N⟩_0. We can define an isometry U from H_ξ to its subspace H′, which is generated by {â | a ∈ A_1}, by the formula U(x_{i_1} ··· x_{i_k})^ = (x_{i_1+1} ··· x_{i_k+1})^ for all i_1, ..., i_k ∈ N. Since φ gives a faithful representation of A, it gives a faithful representation of A_0. For all y ∈ A_1, according to the faithfulness, we have ‖y‖² = sup{⟨y*yâ, â⟩/⟨â, â⟩ | a ∈ A_0, â ≠ 0} = sup{φ(a*y*ya)/φ(a*a) | a ∈ A_0, φ(a*a) ≠ 0}.
Therefore, α extends to a C*-isomorphism from π(A) to π(A_1). Because (H, ξ, π) and (H_1, ξ_1, π_1) are faithful GNS representations for A and A_1 respectively, there is a well defined endomorphism α : A → A. For y ∈ A, we will denote π(y)ξ by yξ for convenience, because the GNS representation of the von Neumann algebra associated to φ is faithful. Since the algebras W*{x_k | k ≥ n} are WOT closed, their intersection is a WOT closed subset of A. Following the proof of Proposition 4.2 in [12], we have Lemma 6.4. For each a ∈ A_0, {α^n(a)}_{n∈N} is a bounded WOT convergent sequence. Therefore, there exists a well defined φ-preserving linear map E : A → T given by E[a] = WOT-lim_{n→∞} α^n(a) for a ∈ A_0. Proof. By Lemma 6.3, there is a norm preserving endomorphism α of A_0 such that α(x_i) = x_{i+1}. For finite I ⊂ N, denote by A_I the non-unital algebra generated by {x_i | i ∈ I}. Suppose a, b, c ∈ ⋃_{|I|<∞} A_I, so we can assume a ∈ A_I, b ∈ A_J and c ∈ A_K for some finite I, J, K ⊂ N. Because I, J, K are finite, there exists N such that (I ∪ K) ∩ (J + n) = ∅ for all n > N. We infer from the exchangeability that φ(aα^n(b)c) = φ(aα^{n+1}(b)c) for all n > N. This establishes the limit on the weak*-dense *-algebra ⋃_{|I|<∞} A_I. We conclude from this and the boundedness of {α^n(b)}_{n∈N} that the pointwise limit of the sequence defines a linear map E on A_0. To extend E to the W*-algebra A, we need to make use of the boolean invariance conditions. Lemma 6.5. Let (A, φ) be a noncommutative probability space and {x_i}_{i∈N} ⊂ A an infinite sequence of random variables whose joint distribution is invariant under the linear coactions of the quantum semigroups B_s(k). Then the moment φ(x_{i_1}^{k_1} x_{i_2}^{k_2} ··· x_{i_n}^{k_n}) depends only on the exponents k_1, ..., k_n whenever i_1 ≠ i_2 ≠ ··· ≠ i_n and k_1, ..., k_n ∈ N. Proof. If i_l ≠ i_m for all l ≠ m, then the statement holds by the exchangeability of the sequence. Suppose the number i_{l_1} appears m times in the sequence, at positions l_1 < l_2 < ··· < l_m with i_{l_j} = i_{l_1} for j = 1, ..., m.
Since the sequence of indices is finite, we can assume, by the exchangeability, that i_1, ..., i_n ≤ N + 1 and i_{l_j} = N + 1 for j = 1, ..., m.
For each M ∈ N, by Lemma 4.2, we have the following representation π_M of the quantum semigroup B_s(M + N): π_M(u_{i,j}) = p_{i,j} and π_M(P) = p, where the p_{i,j} and p are the projections in B(C^{2M}) given in Lemma 4.2. Applying π_M to the invariance condition and using the boolean independence of these projections, the resulting expression splits into two parts. In the first part of the sum, by the exchangeability, we recover as M goes to ∞ the moment with the repeated index replaced by the new index N + 2. The second part of the sum is bounded and goes to 0 as M goes to ∞. By now, we have shown that if there are indices i_s = i_t with s ≠ t in the sequence, we can send them to two different large numbers j_s, j_t such that j_s, j_t differ from the other indices and the value of the monomial does not change. After finitely many steps, we obtain indices j_1, ..., j_n such that none of the j_l equals any of the others. Therefore, we get our conclusion by the exchangeability. Corollary 6.6. Let {x_i}_{i∈N} ⊂ (A, φ) be an infinite sequence of random variables whose joint distribution is invariant under the linear coactions of the quantum semigroups B_s(k). Then φ(x_{i_1}^{k_1} x_{i_2}^{k_2} ··· x_{i_n}^{k_n}) = φ(x_{j_1}^{k_1} x_{j_2}^{k_2} ··· x_{j_n}^{k_n}) whenever i_1 ≠ i_2 ≠ ··· ≠ i_n, j_1 ≠ j_2 ≠ ··· ≠ j_n and k_1, ..., k_n ∈ N. Moreover, we have φ(a x_{i_1}^{k_1} ··· x_{i_n}^{k_n} b) = φ(a x_{j_1}^{k_1} ··· x_{j_n}^{k_n} b) whenever i_1 ≠ i_2 ≠ ··· ≠ i_n, j_1 ≠ j_2 ≠ ··· ≠ j_n, i_1, ..., i_n, j_1, ..., j_n > M and a, b ∈ A_[M] for some M.
Lemma 6.7. For all a, b, y ∈ A_0, we have ⟨E[y]â, b̂⟩ = lim_{L→∞} φ(b* α^L(y) a). Proof. Because the elements of A_0 are finite linear combinations of noncommutative monomials, it suffices to show the property in the case b* = x_{i_1}^{r_1} ··· x_{i_l}^{r_l}, y = x_{j_1}^{s_1} ··· x_{j_m}^{s_m}, a = x_{k_1}^{t_1} ··· x_{k_n}^{t_n}, where i_1 ≠ i_2 ≠ ··· ≠ i_l, and likewise for j_1, ..., j_m and k_1, ..., k_n, with all the power indices positive integers. Let N = max{i_1, ..., i_l, j_1, ..., j_m, k_1, ..., k_n}; then for every L > N we have i_l ≠ j_1 + L and j_m + L ≠ k_1. Therefore, by Corollary 6.6, the value φ(b* α^L(y) a) does not depend on L > N, and the same holds with E[y] in place of α^L(y). Since {â | a ∈ A_0} is dense in H_ξ, we get our conclusion.
Let y ∈ A, and let {y_n}_{n∈N} ⊂ A_0 be a bounded sequence such that y_n converges to y in WOT; then for all a, b ∈ A_0 the numbers ⟨E[y_n]â, b̂⟩ converge. Therefore, E[y_n] converges to an element y′ in the pointwise weak topology, and by the lemma above we see that y′ is independent of the choice of {y_n}_{n∈N}. Since {E[y_n]}_{n∈N} ⊂ T, we have y′ ∈ T. By now, we have defined a linear map E : A → T, with ⟨E[y]â, b̂⟩ = lim_n ⟨E[y_n]â, b̂⟩ for all a, b ∈ A_0. Therefore, E is normal. Proof. Let a ∈ T and b, c ∈ ⋃_{|I|<∞} A_I; then there exists N ∈ N such that a lies in the WOT closure of A_{N+1} and b, c ∈ A_[N]. We can approximate a in the WOT by a sequence (a_k)_{k∈N} in A_{N+1}. According to the definition of E and the exchangeability, we have E[a] = a. This shows E[a] = a for all a ∈ T; it follows that E is a φ-preserving T-T-bimodule projection from A onto T. Lemma 6.11. E[ax] = aE[x] for all a ∈ T and x ∈ A.
Proof. First, we suppose x ∈ A_0; then x ∈ A_[N] for some N ∈ N. Since a ∈ T is contained in the WOT closure of A_{N+1}, there exists a sequence {y_n}_{n∈N} ⊂ A_{N+1} such that y_n converges to a in WOT. For all b, c ∈ A_0, by Lemma 6.7, we obtain E[ax] = aE[x] in this case. The general case follows from the WOT continuity of the map.
Proof. Given a, b ∈ A_0, there exists M such that a, b ∈ A_[M]. Since {â | a ∈ A_0} is dense in H, we get our desired conclusion. Corollary 6.13.
Then, by Lemma 6.12, we obtain the corresponding formula for E[x_{i_1}^{k_1} ··· x_{i_n}^{k_n}]; the last two equations follow from the WOT continuity of E.
Proof. By linearity, we can assume that the b_i are "monomials", i.e. b_j = x_{i_{j,1}} ··· x_{i_{j,r_j}} with i_{j,j′} > N. Then, in the product b_1 x_{i_1}^{k_1} ··· b_n x_{i_n}^{k_n}, we have i_{s,1} ≥ N + 1 > i_{s−1} and i_{t,r_t} ≥ N + 1 > i_{t+1}. Therefore, the claim follows by Lemma 6.13. Proposition 6.15. Let (A, φ) be a W*-probability space and (x_i)_{i∈N} a sequence of selfadjoint elements in A whose joint distribution is invariant under the boolean permutations. Let E be the conditional expectation onto the non-unital tail algebra T of the sequence. Then E has the following factorization property: for all n, k ∈ N, all polynomials p_1, ..., p_n ∈ T⟨X_1, ..., X_k⟩_0 and all i_1, ..., i_n ∈ {1, ..., k} with i_1 ≠ i_2 ≠ ··· ≠ i_n, we have E[p_1(x_{i_1}) ··· p_n(x_{i_n})] = E[p_1(x_{i_1})] ··· E[p_n(x_{i_n})]. Proof. It suffices to prove the statement in the case where p_1, ..., p_n are T-monomials without constant term. We can assume that p_i(X) = b_{i,0} X^{t_{i,1}} b_{i,1} X^{t_{i,2}} b_{i,2} ··· X^{t_{i,r_i}} b_{i,r_i}, where the b_{i,j} ∈ T and the t_{i,j} are positive integers. Let N = max{i_1, ..., i_n}; then each b_{i,j} ∈ T lies in the WOT closure of A_{N+1}. From the Kaplansky theorem, for every b_{i,j} we can find a bounded sequence {b_{l,i,j}}_{l∈N} such that b_{l,i,j} converges to b_{i,j} in the strong operator topology (SOT). Let p_{l,i}(X) = b_{l,i,0} X^{t_{i,1}} b_{l,i,1} X^{t_{i,2}} b_{l,i,2} ··· X^{t_{i,r_i}} b_{l,i,r_i}; then p_{l,k}(x_{i_k}) converges to p_k(x_{i_k}) in SOT. By Lemma 6.14 and the WOT continuity of E, the factorization for the approximants passes to the limit, and the last equality follows from E's WOT continuity.
Main theorem and examples
7.1. Non-unital tail algebra case. Theorem 7.1. Let (A, φ) be a W*-probability space and (x_i)_{i∈N} a sequence of selfadjoint random variables. Suppose A is the WOT closure of the non-unital algebra generated by (x_i)_{i∈N} and φ is non-degenerated. Then the following are equivalent: a) The joint distribution of (x_i)_{i∈N} satisfies the invariance condition associated with the linear coactions of the quantum semigroups B_s(n). b) The sequence (x_i)_{i∈N} is identically distributed and boolean independent with respect to the φ-preserving normal conditional expectation E onto the non-unital tail algebra T of (x_i)_{i∈N}. Proof. a) ⇒ b) follows from Proposition 6.15 by choosing k = 1: we have E[p_1(x_{i_1}) ··· p_n(x_{i_n})] = E[p_1(x_{i_1})] ··· E[p_n(x_{i_n})] whenever i_1 ≠ i_2 ≠ ··· ≠ i_n and p_1, ..., p_n ∈ T⟨X⟩_0, which is our desired conclusion. b) ⇒ a) is a special case of Theorem 5.5. 7.2. Unital tail algebra case. Let (A, φ) be a W*-probability space with a non-degenerated normal state φ, and let (x_i)_{i∈N} be a sequence of selfadjoint random variables. Suppose A is the WOT closure of the unital algebra generated by (x_i)_{i∈N}. Again, we denote by A_0 the non-unital algebra generated by (x_i)_{i∈N} and by I_A the unit of A; we have already considered the case that 1_A is contained in A_0^{w*}. Suppose now that I_A is not contained in A_0^{w*}, and let a ∈ A_0^{w*} satisfy φ(xay) = 0 for all x, y ∈ A_0^{w*}. For x̄, ȳ ∈ A, there exist c_1, c_2 ∈ C and x, y ∈ A_0^{w*} such that x̄ = c_1 I_A + x and ȳ = c_2 I_A + y. Denoting by I_1 the unit of A_0^{w*} and using I_A a = I_1 a for a in the ideal A_0^{w*}, we get φ(x̄aȳ) = φ((c_1 I_1 + x) a (c_2 I_1 + y)) = 0. Since x̄, ȳ were chosen arbitrarily and φ is non-degenerated, we have a = 0. Therefore, (A_0^{w*}, φ(I_1)^{-1}φ) is a W*-probability space with a non-degenerated normal state, of the kind we considered before. Let A_tail be the unital tail algebra of (x_i)_{i∈N} in (A, φ) and T the non-unital tail algebra of (x_i)_{i∈N} in (A_0^{w*}, φ(I_1)^{-1}φ). Since A_0^{w*} is a two-sided ideal of A, for x̄ ∈ A_tail we have x̄ = aI_A + x for some x ∈ T and a ∈ C. By Theorem 7.1, there is a state-preserving normal conditional expectation E from A_0^{w*} onto T.
As we discussed in Section 4, we can extend this conditional expectation E from the unitalization of A_0^{w*} to the unitalization of T. The unitalizations of the two algebras are isomorphic to A and A_tail, respectively. We have Lemma 7.2. The conditional expectation Ē is φ-preserving and normal.
Proof. The normality is obvious; we just check the φ-preserving condition here. Let x̄ ∈ A; then x̄ = aI_A + x for some x ∈ A_0^{w*} and a ∈ C, and φ(Ē[x̄]) = φ(x̄), where the last equality follows from the fact that E is a state-preserving conditional expectation in (A_0^{w*}, φ(I_1)^{-1}φ). Together with Theorem 5.5, we have the following theorem for our unital case: Theorem 7.3. Let (A, φ) be a W*-probability space and (x_i)_{i∈N} a sequence of selfadjoint random variables. Suppose the unit I_A of A is not contained in the WOT closure of the non-unital algebra generated by (x_i)_{i∈N} and φ is non-degenerated. Then the following are equivalent: a) The joint distribution of (x_i)_{i∈N} satisfies the invariance condition associated with the linear coactions of the quantum semigroups B_s(n). b) The sequence (x_i)_{i∈N} is identically distributed and boolean independent with respect to the φ-preserving normal conditional expectation Ē onto the unital tail algebra A_tail of (x_i)_{i∈N}.
Examples.
In this subsection, we provide two examples for the main theorems. For the details of the examples, see [4] and [7]. Non-unital case. Let H be a Hilbert space with orthonormal basis {e_i}_{i∈N∪{0}}, and define a sequence of operators {x_n}_{n∈N} by x_n e_0 = e_n and x_n e_i = δ_{n,i} e_0 for i ∈ N. Let A be the von Neumann algebra generated by {x_n}_{n∈N}; then e_0 is cyclic for A. Since A contains all finite rank operators, A is B(H), and A is the WOT closure of the non-unital algebra generated by {x_n}_{n∈N}. Let φ be the vector state φ(·) = ⟨· e_0, e_0⟩; then we can easily check that the random variables x_i are identically distributed and boolean independent. The tail algebra is CP_{e_0}, which does not contain the unit of B(H). The conditional expectation E is given by E[x] = φ(x) P_{e_0} for all x ∈ A. Unital case. Let H_1 = H ⊕ Ce_{−1} be the direct sum of the Hilbert space H with orthonormal basis {e_i}_{i∈N∪{0}} and Ce_{−1}. Again, define a sequence of operators {x_n}_{n∈N} by x_n e_0 = e_n and x_n e_i = δ_{n,i} e_0 for i ∈ N, and let A be the von Neumann algebra generated by {x_n}_{n∈N}; then A = B(H) ⊕ CP_{e_{−1}}, and the WOT closure of the non-unital algebra generated by {x_n}_{n∈N} is not the whole algebra A but B(H) ⊕ 0. Let φ be the vector state φ(·) = ½⟨·(e_0 + e_{−1}), e_0 + e_{−1}⟩; then we will also have that the random variables x_i are identically distributed and boolean independent. The unital tail algebra is CI_{H_1} + CP_{e_0}, which contains the unit of B(H_1). The conditional expectation Ē is given by Ē[b ⊕ cP_{e_{−1}}] = c I_{H_1} + ⟨(b − cI_H)e_0, e_0⟩ P_{e_0} for all b ∈ B(H) and c ∈ C.
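The non-unital example can be checked in finite dimensions. In the sketch below (an illustration; H is truncated to span{e_0, ..., e_3}, and the formula E[x] = φ(x)P_{e_0} for the conditional expectation is an assumption consistent with the construction above), the operators x_n are built as matrices, the state is φ = ⟨· e_0, e_0⟩, and the boolean factorization is tested on a few alternating words:

```python
import numpy as np

dim = 4                      # truncate H to span{e_0, e_1, e_2, e_3}
e = np.eye(dim)

def x(n):
    """x_n e_0 = e_n, x_n e_n = e_0, x_n e_i = 0 otherwise (n = 1, 2, 3)."""
    return np.outer(e[n], e[0]) + np.outer(e[0], e[n])

def phi(a):
    """The vector state at e_0."""
    return (a @ e[0]) @ e[0]

P0 = np.outer(e[0], e[0])    # the tail algebra is C * P_{e_0}

def E(a):
    """Conditional expectation onto the tail algebra (assumed form)."""
    return phi(a) * P0

x1, x2 = x(1), x(2)
# Scalar boolean factorization over alternating words:
assert np.isclose(phi(x1 @ x2 @ x1 @ x2),
                  phi(x1) * phi(x2) * phi(x1) * phi(x2))
assert np.isclose(phi(x1 @ x1 @ x2 @ x2), phi(x1 @ x1) * phi(x2 @ x2))
# Operator valued factorization through E:
assert np.allclose(E(x1 @ x2), E(x1) @ E(x2))
assert np.allclose(E(x1 @ x1 @ x2 @ x2), E(x1 @ x1) @ E(x2 @ x2))
# E is phi-preserving and a P0-bimodule map:
assert np.isclose(phi(E(x1 @ x1)), phi(x1 @ x1))
assert np.allclose(E(P0 @ x1 @ x1 @ P0), P0 @ E(x1 @ x1) @ P0)
print("boolean independence verified for the shift example")
```

Each x_n here is a symmetric rank-two partial isometry, and its distribution with respect to φ is the symmetric Bernoulli law, which is exactly the law excluded by faithful normal states.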
Two more kinds of noncommutative probabilistic symmetries
Since C⟨X_1, ..., X_n⟩ is an algebra which is freely generated by the n indeterminates X_1, ..., X_n, it would be natural to define coactions of quantum semigroups on C⟨X_1, ..., X_n⟩ as algebra homomorphisms and not only as linear maps. In this section, we will study the probabilistic symmetries associated with algebraic coactions of the quantum semigroups B_s(n) and PB_s(n)P on C⟨X_1, ..., X_n⟩. We will define the invariance condition for the joint distribution of a sequence of noncommutative random variables in a form similar to that of the previous sections. Now, let us consider C⟨X_1, ..., X_n⟩ as an algebra and define the coaction of the quantum semigroup B_s(n) to be the homomorphism L′_n : C⟨X_1, ..., X_n⟩ → C⟨X_1, ..., X_n⟩ ⊗ B_s(n) given by the following formulas: L′_n(1) = 1 ⊗ I, L′_n(X_i) = Σ_{k=1}^n X_k ⊗ P u_{k,i} P. Then we have L′_n(X_{i_1} ··· X_{i_k}) = Σ_{j_1,...,j_k=1}^n X_{j_1} ··· X_{j_k} ⊗ P u_{j_1,i_1} P ··· P u_{j_k,i_k} P, and (L′_n ⊗ id_{B_s(n)}) L′_n = (id_{C⟨X_1,...,X_n⟩} ⊗ ∆) L′_n. We will call L′_n the algebraic coaction of B_s(n) on C⟨X_1, ..., X_n⟩. This invariance condition is so strong that we can obtain our conclusion already in finitely generated probability spaces.
Proposition 8.1. Let (A, φ) be a W*-probability space with a non-degenerated state φ, fix n ∈ N, and let (x_i)_{i=1,...,n} be a sequence of selfadjoint noncommutative random variables in A.
Presenting Eco-Anatomical Data for Saponaria jagelii, a Species on the Edge of the Blade
The seeds, roots, leaves, flowers and fruits of the critically endangered (CR) species Saponaria jagelii Phitos & Greuter (Caryophyllaceae) were studied. The morphology of the seeds was investigated with scanning electron microscopy. The seeds were imbibed, germinated and developed into young plants. These plants, along with strictly selected wild-growing plants, were used for optical microscopic observations. The leaves and flowers were observed with scanning electron microscopy as well. At least two types of active glandular trichomes were detected on both the leaves and the calyxes of the flowers. The structures of the primary and secondary roots were also investigated. The roots turned into secondary structures very quickly and very close to the root tip. Light microscopy and histochemical reagents were employed to detect secondary metabolites of interest in the leaves. All the metabolites detected were already reported to be synthesized in stressed plants. Distribution data are presented. Conservation actions based on the habitat morphology and the human activities within it, such as the limitation of beach access during the seed-dispersing period and the prohibition of vehicle usage, are recommended in order to protect this tolerant yet severely stressed plant species.
Introduction
Saponaria jagelii Phitos & Greuter (Caryophyllaceae) is a small, annual, narrow endemic species of Greece, with an erect, robust, 3-10 cm long branching stem, thriving only in two scattered, very restricted localities in the western part of the small island of Elafonisos (25 km²) [1]. This island is located 600 m away from the southern coast of the Peloponnese. It has been reported, though without confirmation, that the species also exists on the Malea peninsula (South Peloponnese). Recently, the species has also been reported to exist on the sand dunes located on the south-east coast of the island of Limnos (North Aegean Sea), but no detailed data were published [2].
The name of the genus Saponaria derives from the Latin word sapo-saponis, referring to soap, indicating the use of some species of the genus in soap making. According to the Flora of Greece, only eight species of the genus are found in Greece. Two of them are Greek endemics: Saponaria jagelii and Saponaria aenesia. The latter is endemic to the island of Kefallinia, on Mount Ainos. The other endemic species, Saponaria jagelii, was named after A. Jagel, the student who first collected this species [1].
The plants grow exclusively in EU priority habitat 2120, along the shoreline with Ammophila arenaria (white dunes), in the NATURA 2000 site GR2540002 "Periochi Neapolis kai Nisos Elafonisos" (Figure 1a). Their stems are reddish and covered with scattered glandular trichomes in the upper part [1]. The lower part is rather smooth. The leaves are fleshy and reddish-green, 1-4.5 cm long and lanceolate (Figure 1b). The leaf margins occasionally display delicate trichomes (Figure 1b). The upper leaf surface and leaf stalks present thick pubescence as well. The calyx is cylindrical and reddish, bears short teeth and is covered with glandular trichomes (Figure 1b,c) [1]. The petals are purple and tapered toward the base (Figure 1c) [1]. The flowering period extends from the end of March until early May, while the fruiting season extends from early May until early June. Dispersal occurs shortly after, when the fruit, a nearly cylindrical capsule, opens and releases the seeds. The species seems to be a part of the typical plant community growing in disturbed areas along sand dunes [1,3-5]. The most frequent "neighbors" of S. jagelii are Euphorbia paralias L., Ammophila arenaria, Pancratium maritimum (with greater spatial spread than a year ago), Matthiola tricuspidata (Figure 1c), Medicago marina L., Silene sedoides Poir., Centaurea raphanina subsp. mixta and Anagallis arvensis.
The species has been categorized as CR (critically endangered) according to IUCN Red List criteria B1ab(i, ii, iii, v) + B2ab(i, ii, iii, v) [4], because it is known to occur at only two sites covering a very small area on two restricted sandy seashores. Furthermore, the quality of the habitat and the number of individuals are expected to decline.
Tourism is rapidly developing on the island, and several human activities on the beaches, such as the transit of motor vehicles and trampling by visitors, represent major threats, especially during the flowering period (threat 1.3: Tourism and recreation areas; threat 6.1: Recreational activities) [6]. Finally, the introduction of alien, invasive species (Aptenia cordifolia and Carpobrotus edulis) from neighboring private property also imposes extra pressure on the species' survival.
Moreover, to the best of our knowledge, not a single piece of information is available on the anatomical features of the leaves, stems, roots, flowers and seeds, or on the germination ability of this critically endangered plant. It seems that determining the optimal conditions for seed germination will facilitate the ex-situ conservation of this species.
Life 2024, 14, 398 3 of 13
Concerning all the above, we launched this investigation intending to establish a detailed description of some structural and a few ecophysiological features, to facilitate the conservation and ex-situ culture of a plant that, regrettably, has poor survival prospects in the second half of the twenty-first century [6].
Materials and Methods
Visits to the site
We approached the two restricted localities on the small island of Elafonisos in two consecutive years. First-year visit: 25 May 2022, at the end of the flowering season; sampling: seeds were collected. Second-year visit: 1-3 April 2023, mid-flowering season; sampling: capsules and seeds were collected.
Seed Germination
Intact capsules were collected in-situ during our visits to the habitat. They were placed in incubators (Heraeus B5050, West Midlands, UK) for desiccation at a constant temperature (17 °C). The humidity was gradually reduced. After three months, remnants of the pericarp were removed so that the seeds were completely clean, and they were transferred back to the incubator for one month. Seed germination took place in a P-Selecta incubator (Model No. 2000238, Barcelona, Spain) ventilated through a HAILEA ACO-9160 (China) at an output of 4 L/min. Then, the seeds were transferred to culture chambers (elvem, model BOD100, GR) under controlled conditions (temperature/light), and their germination ability was tested at various temperatures (5, 10, 12.5, 15 and 20 °C). These seedlings, along with two individuals transferred from the habitat (see below), were fixed for microscopic observations. In addition, unsplit capsules were ruptured in the lab in order to count the number of seeds per capsule.
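A trial of this kind is usually summarized as a germination percentage per temperature treatment. The sketch below shows one way to tabulate such results; the seed counts are illustrative placeholders, not the study's data (the paper reports only that the highest rate occurred at 10 °C).

```python
# Minimal sketch: germination percentage per temperature treatment.
# The counts below are HYPOTHETICAL placeholders, not the study's data.

def germination_rate(germinated: int, sown: int) -> float:
    """Return the germination percentage for one treatment."""
    if sown <= 0:
        raise ValueError("number of sown seeds must be positive")
    return 100.0 * germinated / sown

# Illustrative counts per treatment: {temperature_C: (germinated, sown)}
trial = {5: (3, 20), 10: (18, 20), 12.5: (15, 20), 15: (12, 20), 20: (4, 20)}

# Percentage per treatment, and the treatment with the highest rate
rates = {t: germination_rate(g, n) for t, (g, n) in trial.items()}
best = max(rates, key=rates.get)
```

With these placeholder counts, `rates[10]` is 90.0 and `best` is the 10 °C treatment, mirroring the qualitative result reported in the text.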
Microscopy
During our first visit to the site (2-4 April 2022), we detached only two plants for further treatment. Small parts from the middle of the upper leaves, from the roots close to the root tip and from the stems were removed at random. Whole mounts of flowers and seeds were also obtained. The tissues were fixed in phosphate-buffered 3% glutaraldehyde (Merck KGaA, Darmstadt, Germany; pH 6.8) at 0 °C for 2 h and post-fixed in phosphate-buffered 1% osmium tetroxide (Merck KGaA, Darmstadt, Germany). They were dehydrated in a graded ethanol series. The properly prepared pieces underwent one of the following: (a) The pieces were transferred to 100% acetone, critical-point-dried (Autosamdri®-815, Tousimis, Rockville, MD, USA), double-coated with gold and platinum and viewed with a JEOL JSM-6360 high-vacuum scanning electron microscope (Tokyo, Japan). All electron micrographs were taken with the instrument's built-in camera (accelerating voltage 20 kV; spot size 50).
(b) The tissue, dehydrated in absolute ethanol, was transferred to propylene oxide and imbued in gradually increasing concentrations of Durcupan ACM (Fluka, Steinheim, Switzerland), a four-component epoxy resin. Finally, the tissue was left in pure Durcupan to polymerize at 70 °C for 36 h. Semithin sections, obtained with glass knives on an LKB Ultrotome III (Sweden), were placed on glass slides and stained with 0.5% toluidine blue O (in 1% borax solution) as a general stain for light microscopic observations [7]. Sections of fresh or epoxy-embedded material were viewed with an OLYMPUS CX-41 light microscope (Japan). The original light micrographs were recorded digitally using a Nikon D5600 camera at 24.2 megapixels. The literature on double fixation was cited in detail by Christodoulakis et al. [8] and Christodoulakis et al. [9]. All micrographs of the leaves and flowers originate from tissues of the detached wild-growing plants. The root micrographs originate from the grown seedlings.
Histochemistry
A histochemical investigation was executed on sections of either fresh or plastic-embedded tissues from the leaves and roots using the proper reagents. The aim was to trace secondary metabolites of special interest, mostly those crucial for the survival of environmentally stressed plants.
The histochemical reagents employed for the semithin sections of plastic-embedded tissue (pet) were as follows: (a) saturated Sudan black B solution in 70% ethanol [10] for the detection of lipids; (b) saturated alcian blue solution in 3% acetic acid [11] for the detection of any stored polysaccharides; (c) 1% aniline blue black in 70% acetic acid [12] for the histochemical detection of accumulated proteins.
All stains were matched to controls. All glass mounts were observed with an OLYMPUS CX41 optical microscope.
Seed Morphology and Anatomy-Germination
The number of seeds per capsule, counted from capsules ruptured in the lab, ranged from 7 to 14; the mean was 9.4 seeds per capsule. The seeds of S. jagelii are more or less globular, with a diameter of 1.0 to 1.5 mm (Figures 2a and 3g). They are bitegmic, with the testa derived from the outer integument, while the tegmen originates from the inner integument (Figure 3b). They present a peculiar deformation on the side of the hilum (Figure 3e,g,h). The developed testa looks like a surface covered with rather elliptical "tiles" (Figure 3e-h). These are the outer (epidermal) cells of the seed coat. They possess intensely stained, thick, cutinized external cell walls (Figure 3a-c). The embryo undergoes a globular stage in immature seeds, develops into the torpedo stage and bends to display the full cotyledon curvature. The mature embryo is curved at the apex. The suspensor is hardly detectable. The endosperm is of the "cellular type" and consists of large cells occupying most of the seed volume (Figure 3b,c). Observations with polarized light did not reveal any inorganic crystalline structures, as are common in the seeds but not in the leaves and roots of most halophytes (Figure 3d) [23].
The development of the germinated seeds was studied in detail (Figure 2b). The roots grow rapidly to a considerable length, while the cotyledons quickly become photosynthetically active (Figure 2b). The highest germination rate was observed at 10 °C, and all seeds germinated in the dark, an indication of total photoinhibition, a "habit" common among plants growing on coastal sand dunes [24,25].
The promotion of seed germination at low temperatures (10-15 °C) seems to agree with the environmental conditions, particularly the ambient temperatures during the rainy season [25,26].
Leaf and Root Anatomy
Free-hand cross-sections of the fleshy, reddish-green leaves reveal a compact structure with delicate trichomes (Figure 4a). The leaves are dorsiventral but without the typical palisade and spongy parenchyma. The mesophyll cells on the adaxial side are somewhat elongated, while those on the abaxial side are more or less globular (Figure 4a,b). This arrangement resembles the leaf structure of succulents rather than that of xerophytes. The cells of the epidermal tissue throughout the leaf surface are rather thin, with intensely stained cell walls (Figure 4b). The conductive tissue is far from developed. There is no mechanical tissue supporting the conductive bundles, whose dispersion is rather sparse. Stomata are observed on both sides of the leaf (Figure 4b). Stomata on both sides are a typical feature of some xerophytes, but most Mediterranean plants, especially the very well adapted evergreen sclerophylls (maquis) and the seasonally dimorphic ones (phrygana), are hypostomatic. Young leaves have long, pilate (uniseriate, multicellular) secretory trichomes (Figure 4a). Their secretory heads are located on top of multicellular stalks. These trichomes can be observed mostly on the margins of the leaves (Figure 4a,b). Aged leaves are largely free of trichomes. The density of the trichomes on the leaf and the concentrations of alkaloids rapidly decrease with leaf age. This suggests that the functional role of the trichomes is likely most important in the early stages of Saponaria's leaf development, when the epidermal tissue has not been completely differentiated [27]. The application of histochemical reagents indicated mild reactions, mostly in the epidermal tissue and the glandular heads of the trichomes, for phenolic tannin precursors (DMB) (Figure 4c); polyphenols (FeCl3) (Figure 4d); alkaloids (Dittmar) (Figure 4e); terpene-containing steroids (SbCl3) (Figure 4f); flavonoids (vanillin) (Figure 4g); and
alkaloids (Wagner) (Figure 4h). These metabolites are common in stressed plants, mainly in their leaves. All other histochemical reagents employed did not cause a reaction.
The roots of S. jagelii appear to prepare quickly for the stressful conditions of the arid, salty environment. In-situ, the roots are very long and probably grow rapidly downwards to explore deep soil horizons for any sign of water. In cultured plants, after seed germination, it is remarkable how rapidly the root grows and how quickly it passes from the primary to the secondary structure. Very young roots, close to the root tip, demonstrate an elaborate woody structure (Figure 5a). Wide tracheary elements appear in the middle, while thick-walled sclerenchyma cells appear to clasp the narrow vascular elements. The rays are few and uniseriate. The cells in the cortex are densely accommodated (Figure 5a). The whole structure of the root seems to help the plant penetrate the soil and overcome the difficulties of finding and transporting water. After a series of histochemical investigations, only alkaloids were detected in the root (Figure 5b-d).
Flower and Fruit Anatomy
The light-red to purple petals of the S. jagelii flowers (white arrow in Figure 6a) are surrounded by a hairy calyx (cyan arrow in Figure 6a). The sepals are lined with trichomes. Capitate trichomes are frequent on the abaxial side of the petals. They develop a large head, full of excreted material, which is probably a "call" to pollinators (red arrows in Figure 6b). Uniseriate (multicellular), tapering, cuspidate defensive trichomes are abundant all over the abaxial side of the sepals as well as on their upper margins (Figure 6a; green arrows in Figure 6b). Numerous stomata of the anomocytic type can also be observed. Among the stomatal complexes, a few appear to be diacytic (yellow arrows in Figure 6b,d).
The fruit is a capsule (Figure 6c), 15-20 mm long, opening by four ascending, recurved teeth (carpels) (Figures 7a and 8a,c). It has a fleshy, hairy exocarp, a thin mesocarp and a thin-walled, hard endocarp (Figure 6c). Glandular pilate trichomes with multicellular stalks, accommodated on an elevated, rosette-like base, cover the whole surface of the exocarp (Figure 6b,d). Between these trichomes, numerous diacytic stomata can be observed (Figure 6d). The immature fruits are fully covered with pointed defensive hairs (Figure 7b).
Within the fruit (red rectangle in Figure 8a), the mature ovules ripen into kidney-shaped seeds (Figure 8b). Their dark-brown seed coats appear pebbled, lacking any appendages (Figure 8d). In both cross- and longitudinal sections, the seeds, whether immature (Figure 8c) or fully mature, are attached to an axile, free, central placenta (Figure 8d).
Habitat, Threats and Protection
The annual species S. jagelii has a peculiar ecology. It is characterized, according to the "Vascular plants of Greece" [28], as a therophyte ("Annuals, completing their life cycle (sometimes several times) within one growing period, surviving the unfavorable period as seeds or seedlings (spring-green, summer-green or overwintering-green ephemerals)"), while the habitat is described as coastal ("Marine waters and mudflats, salt marshes, sand dunes, littoral rocks, halo-nitrophilous scrub"). It thrives on the coastal sand dunes of the small island of Elafonisos, located close to the southern coast of the Peloponnese mainland. The species was reported initially as "Endangered" [3] and later as "Critically Endangered" (1998 and 2006) according to the IUCN [4]. It is not included in any international convention or national legislation. The plant is indirectly protected, as it falls within the Natura 2000 site GR2540002 "Periochi Neapolis kai Nisos Elafonisos".
Under the Köppen-Geiger climate classification, Elafonisos (longitude 22.9594983; latitude 36.4875509) features a hot-summer Mediterranean climate (Csa). Temperatures typically range between 12 °C and 28 °C throughout the year (Figure 9). On rare occasions, they can drop to 3 °C or rise as high as 36 °C (−1.46% lower than Greece's averages). The island receives an average of 55.88 mm of precipitation and has about 104 rainy days (28.72% of the time) annually (Figure 9). Elafonisos enjoys an average of 4006 h of sunshine throughout the year, and daylight varies from 9 h 41 min to 14 h 35 min per day [29,30]. In this arid, high-salinity environment, a tiny plant species struggles to survive against human disturbances and oncoming climate change.
Specifically, on the island of Elafonisos, S. jagelii was found in only two distinct regions, about 2 km apart. The first region is easily accessed, since it is next to an often-visited beach, where tourists or locals can come "face to face" with this critically endangered species. This means that, starting in May, when the number of tourists on the sandy beaches of Elafonisos increases rapidly, many individuals are probably trampled by cars, motorcycles and off-road vehicles, subjected to littering by visitors, etc. Indeed, during the two consecutive years of our investigation and in-situ visits (2022, 2023) [25,26,31], no individuals were found there, in contrast to the 2019 data [32]. In this region, the population of the species is probably lost, which is the most pessimistic scenario, unless the seed bank remaining buried within the sand is mobilized in time, i.e., within a few years. Bearing in mind that the seeds germinate under dark conditions, these buried seeds may yet germinate at some point. During our last visit, on 1 April, S. jagelii was in mid-flowering season, while the full process of seed production and dispersal seems to be completed by the middle to the end of May, when touristic activity on the island starts to increase.
The second region, where the main population of fewer than 2000 individuals was found, is more isolated and not easily accessed. Unfortunately, on the sand, we can easily trace the tracks left by tourist motorcycles. In this region, whose area in the formed polygon is less than 2000 m², the population of the species still thrives. Interestingly, a small increase of 5% of the total population was recorded (compared to data from 2022 and 2023). This population polygon consists of two different substrates: one is the already-known coastal dunes, while the other is composed of a really limited sandy area and an area mainly consisting of little pebbles. The mean tendency of the species is to remain as individuals rather than forming clusters; individuals are found in close contact (e.g., 2-3 cm). The distribution of individuals during the two visits was about the same; ~75% of the measured plants did not form clusters. The most interesting note is that, during the visit in the second year, a major increase in the individual form was detected in the sandy area of the population polygon, in areas closer to the sea. This is considered to be of great importance, since lots of litter (mostly dead, semi-decomposed aquatic plants, as well as human-generated garbage, washed ashore by south/north-south winds) often covers a part of the shore on which S. jagelii is thriving. This minor increase in geographic range in the areas closer to the sea, where a large number of individuals (not clusters) were detected, could, due to the above-referenced litter, result in the species being buried under it, posing some new questions about the future of this population. We have not provided any coordinates in order to protect the plant.
It is crucial to point out the direction of the wind in the region. The habitat is strongly affected by west and north-west winds. According to data obtained by the National Observatory of Athens (NOA), these directions prevail only in April, while from May to August, winds in the eastern direction are dominant in this area. Recent research [2] reporting the existence of the plant on Lemnos Island fails to give any information about the size of the population.
Finally, what seems to be of higher importance is that the island of Elafonisos has a rapidly developing tourist industry, accompanied by several so-called "recreational activities" on the beaches. The use of noisy off-road vehicles crossing the dunes and the thousands of visitors trampling the plants have proven to be a major threat, especially during the flowering period. This could result in a further rapid decline in the population and, eventually, cause the extinction of the species.
S. jagelii is being cultivated at the Seed Bank of the National and Kapodistrian University of Athens (Greece) and the "I.& A. N. Diomidis Botanical Garden" in Athens (Greece).
Conclusions
According to "The Red Data Book of Rare and Threatened Plants of Greece" [3], the small plant of S. jagelii, thriving in a remote corner of our planet, is facing extinction. We traced this plant and carefully recorded its phenology, as well as the attributes and the peculiarities of its micro-environment. Very sparingly, we collected seeds and the aboveground parts of two plants for further anatomical investigation. The seeds germinated, and the seedlings rapidly developed; their roots were also excised for microscopic investigation.
S. jagelii has evolved into a "smart" species concerning its current stress-escaping strategies. The leaves are well equipped for a xeromorphic life, yet they do not possess the typical structure of xerophytes adapted to thrive in the high temperatures of arid areas of Greece. The leaf epidermis is thin, and the mesophyll is moderately compact, approaching the leaf structure of succulents rather than that of xerophytes. The plant is not hypostomatic, as most well-adapted Mediterranean species are. The roots appear prepared to explore deep soil horizons and transport water under highly unfavorable conditions. The season of flowering and seed dispersal runs ahead of the unfavorable season. This is also a major advantage, favoring the plant's ability to withstand a moderate tourist invasion during the peak summer season, since the fruit capsules ripen before the end of May.
The anatomical and ecophysiological observations discussed in the current research might also serve as a tool for exploring the potential of the plant to survive the constantly rising temperatures of climatic change. However, the species might not be prepared to face such rapid changes happening simultaneously in its environment. The highest germination rate was observed at as low as 10 °C, while the temperatures on the island range higher than that, between 12 °C and 28 °C, throughout the year (Figure 9), with a probability of an increase due to climate change. Currently, seed production and dispersion seem to occur by the middle to the end of May. The high temperatures will affect not only the germination of the species but also the survival of seedlings, as the threat of touristic activities will be higher, with the tourist season starting earlier in May, a tendency that is already noted in all Greek islands. Visitor arrivals on the island, even in the middle of the COVID-19 pandemic (2020 *, 2021 *), are given in Table 1 [33]. Considering all these pieces of information, and in agreement with the conservation actions suggested in the IUCN Red List of Threatened Species [4], we suggest the following actions be considered for the protection of the species. We strongly recommend that all vehicles be strictly prohibited from accessing the sand dunes, as this is crucial for the survival of the species. In addition, clearly marked obligate paths, with obvious information panels, must be established. Moreover, the sand beaches must be made inaccessible to visitors during the seed dispersal period of S. jagelii, i.e., from early May to early June. During the other months of the touristic period, from July to September, all activities in the area must be mild. After all, there are some other visit-worthy, fantastic sandy beaches on the island (e.g., Simos beach). Another idea, focused on raising awareness of both local people and visitors, is the creation of plant Micro-Reserves, protected areas with less than 20 ha surface [34]. This continuous monitoring system could be a major tool for the conservation of this rare endemic species and a major help for preserving biodiversity in this NATURA 2000 site [35,36].
Figure 1 .
Figure 1. (a) The broad distribution area of Saponaria jagelii is the island of Elafonisos (©Google Maps, Imagery ©2024 Airbus, CNES); (b) an individual is visited by the specific pollinator Bombylius sp.; (c) Saponaria jagelii and Matthiola tricuspidata are common neighbors.
Figure 3 .
Figure 3. The seed of S. jagelii. (a-d), light micrographs; (e-h), scanning electron micrographs. (a) A tangential section through the seed coat; (b) a cross-section of fresh material to demonstrate the seed coat layers; (c) a medial section through the endosperm and the embryo; (d) the same as (c) observed with polarized light to trace any crystalline structures within the embryo. (e) The anterior view of the seed; (f) the side view of the seed; (g) the rear view of the seed; (h) details of the hilum.
Figure 4 .
Figure 4. Leaf cross-sections: (a) free-hand section; (b) section of epoxy-embedded tissue stained with toluidine; arrows point to stomata; (c-h) application of histochemical reagents to fresh leaf sections; colored areas indicate positive reactions.
Figure 5 .
Figure 5. Root cross-sections and histochemistry: (a) section of epoxy-embedded tissue stained with toluidine blue; (b) fresh tissue stained with Dittmar stain for alkaloids; (c) fresh tissue, no reaction with SbCl3; (d) fresh tissue stained with Wagner, positive reaction for alkaloids.
Figure 6 .
Figure 6. The flower and the fruit of S. jagelii. (a) A whole mount of the flower. The cyan arrow points to the calyx, while the white arrow points to the petals; the capitate secretory trichomes can be observed on the surfaces of the sepals. (b) A part of the abaxial side of a sepal. The red arrows point to the peltate hair, the green ones to the uniseriate, cuspidate, defensive trichomes, and the yellow ones to stomata. (c) A scanning electron micrograph of a dissected fruit. The inner, funnel-shaped part is the hard, rather woody endocarp. (d) The adaxial side of the exocarp demonstrating glandular pilate trichomes with multicellular stalks. The arrows point to stomata.
Figure 7 .
Figure 7. The flower, the leaf primordia and the immature fruit of S. jagelii. (a) A whole mount of the mature flower and the apical meristem. The slightly hairy petals and the short stamens, deep in the corolla, can be observed on the left side; the leaf primordia, covered with pointed defensive hairs and peltate glandular trichomes, are demonstrated on the right side of the figure. (b) A whole young fruit is observed with the stamens and the petals disorganized; the sepals, fully covered with pointed defensive hairs, are still tightly joined.
Figure 8 .
Figure 8. (a) An S. jagelii cluster of 3-4 individuals with naturally opened capsules (cyan circles): a capsule with seeds in it is indicated by the red square. (b) An open capsule: yellow arrows point to the seeds within the endocarp; the red arrow points to the remnants of the pistil. (c) Cross-sectioned capsules: immature seeds (iS) are indicated (left), while the carpels are marked with blue arrows on the empty capsule (right). (d) An axial, longitudinal section of the fruit. The cyan arrows point to the placenta; "mS" means "mature seeds".
Figure 9 .
Figure 9. A temperature/precipitation chart for the island of Elafonisos (mean values for 2017-2022), where the natural environment of S. jagelii is located.
Table 1 .
Visitor arrivals on the island, 2018 to 2022. The island can be accessed only by boat. | 2024-03-22T16:19:36.946Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "9fb5afb3539263daf5cabebcb0f049887e335b09",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/14/3/398/pdf?version=1710685094",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1847dea01a45b1aaf9c2755ed6ddca1adddae6b",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
230617150 | pes2o/s2orc | v3-fos-license | An investigation into specialist practice nurses’ knowledge of cardiopulmonary resuscitation guidelines in a tertiary hospital in Gauteng Province, South Africa
Background Cardiac arrest is among the major causes of sudden deaths globally. Although out-of-hospital cardiac arrest occurs more commonly, in-hospital cardiac arrest is still a major health problem. Critical care areas provide care to critically ill patients who are at risk of cardiac arrest. It is important that nurses are knowledgeable and competent in cardiopulmonary resuscitation (CPR) in order to optimise the patient's chances of survival and quality of life after cardiac arrest. Objectives To investigate specialist practice nurses' knowledge of evidence-based guidelines for CPR. Methods A descriptive cross-sectional survey was utilised. We sampled all critical care registered nurses (n=96) currently working in the adult emergency departments and intensive care units at Charlotte Maxeke Johannesburg Academic Hospital in Johannesburg, South Africa. A self-administered instrument, the 'evaluation questionnaire on CPR knowledge for health personnel from emergency services', was used. Data were analysed using descriptive and comparative statistics. Results The mean CPR knowledge score was 46%. A score of 84% was considered adequate for a pass, and no respondents achieved this score. The majority of the respondents (80.85%; n=76) were specialists in the field of intensive care nursing. Conclusion The CPR knowledge of specialist practice nurses was suboptimal for the care required in high-risk settings. Further training is indicated. Contributions of the study We showed that specialist nurses working in critical care environments at a public hospital in Johannesburg scored poorly in a CPR knowledge test.
ARTICLE
compressions, providing compressions of adequate rate and depth, avoiding leaning between compressions, and avoiding excessive ventilation. [6] Advanced level training in resuscitation is mostly aimed at healthcare professionals and skills learnt include rhythm recognition, defibrillation, cardioversion and airway management. [6] Burkhardt et al. [7] conducted a study to quantify the effect of knowledge on performance of all aspects of chest compressions, including rate, depth, recoil and hand positioning. They showed that respondents who identified effective compression rate to be >100 per minute performed chest compressions at a higher rate. On hand placement, respondents who knew proper hand placement performed a greater number of compressions. These findings suggested that knowledge of the guidelines had significant impact on CPR performance for at least some components of CPR.
In an in-hospital setting, nurses are often the first on the scene of cardiac arrest, initiating CPR as well as summoning assistance from the code team. It is therefore important that specialist nurses working in critical care areas are knowledgeable and competent in CPR to optimise the patient's chances of survival and quality of life. However, studies from across the globe report that nurses' knowledge of basic and advanced life support resuscitation guidelines is suboptimal. [18,20,21] A similar study conducted in Botswana confirmed that nurses' knowledge and skills related to CPR guidelines were poor. [8] Studies have shown that training influenced knowledge and performance of CPR. [9] According to Krajina et al., [9] theoretical knowledge of how to perform CPR is essential for the ability to perform it in practice, and nurses with good theoretical knowledge achieve better CPR performance. In agreement, Roshana et al. [10] showed that respondents who received some CPR training within 5 years obtained the highest mean score, and those who were involved in resuscitation frequently scored higher than those who were seldom or not involved in resuscitation. Certification in CPR should be renewed every 2 years. There is no legislative requirement for nurses to obtain BLS certification; however, healthcare establishments may choose to make it mandatory, and it is an assumed competency for a specialist practice nurse. [11] In view of nurses' inadequate knowledge of CPR, as reported previously in SA, [12] the objective of this study was to investigate specialist practice nurses' knowledge of evidence-based American Heart Association (AHA) resuscitation guidelines in a university-affiliated public sector hospital in Johannesburg, with the intention of making recommendations for clinical practice and education of critical care nurses.
The secondary objectives included determining differences in CPR knowledge across demographic characteristics, and the relationship between nurses' post-basic specialisation, as well as years of nursing experience, and CPR knowledge.
Methods
A descriptive cross-sectional survey was utilised to assess the knowledge of critical care nurses of CPR guidelines.
The setting for this study included two adult emergency departments (Trauma and Medical) and five ICUs, each unit with 7-12 beds, at Charlotte Maxeke Johannesburg Academic Hospital, a public sector hospital in Johannesburg. The trauma and emergency department is a level I public sector trauma unit that accepts direct cases in need of highly specialised care. The ICUs include trauma, cardiothoracic, coronary care, neurosurgery and multidisciplinary or general ICU. These are highly specialised critical care settings, which accept critically ill patients from both medical and surgical disciplines.
Population and sample
Specialist practice emergency (n=33) and intensive care (n=63) nurses working in the hospital were eligible for participation in this study. A total of 94 participants were recruited to the study, and the response rate to the convenience sampling method was 97%.
Data collection
A self-administered questionnaire was used. The 'evaluation questionnaire on CPR knowledge for health personnel from emergency services' , derived from the recommendations of the 2010 AHA guidelines, updated for CPR and adapted for use in Spain by Garcia et al., [13] was chosen as the most suitable instrument and was used with the permission of the authors. The tool was validated in a pilot study conducted by Garcia et al. [13] For this study, face and content validity were ensured by an expert review panel of specialist nurses in the fields of emergency nursing, critical care and nursing education. An organised review of the instrument's contents was undertaken, and since the tool was used to measure knowledge of the 2010 AHA guidelines for CPR and cardiovascular care, some contents of the questionnaire were modified to include the 2015 updates, and then the tool was reviewed by a panel of experts to ensure face and content validity. Data were collected between November 2016 and February 2017.
The questionnaire comprised 20 questions on CPR, each with 4 possible responses and 1 correct answer. The questionnaire had 6 questions focused on BLS, and the rest of the questions were focused on ALS. Among the questions, 7 referred specifically to aspects updated in the 2010 clinical practice guideline on CPR for BLS and ALS.
Data management and analyses
Data were captured and cleaned using an Excel spreadsheet (Microsoft Corp., USA). Statistical analysis was performed using STATA software, version 10 (STATA Corp., USA). Respondents had to obtain a score of 17 out of 20 (85%) to be classified as having sufficient knowledge according to the 2010 AHA accreditation criteria.
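The pass criterion above can be sketched in a few lines of Python. This is only an illustration: the function names and the example inputs are ours, not taken from the study; only the 17/20 cut-off comes from the text.

```python
# Illustrative sketch of the study's scoring rule: 17 of 20 correct (85%)
# counts as sufficient knowledge. Function names and inputs are invented.

def score_percent(correct: int, total: int = 20) -> float:
    """Convert a raw questionnaire score to a percentage."""
    return 100.0 * correct / total

def sufficient_knowledge(correct: int) -> bool:
    """Apply the study's cut-off of 17 correct answers out of 20."""
    return correct >= 17

print(score_percent(17))        # 85.0
print(sufficient_knowledge(9))  # False: 9/20 is 45%, near the study's mean
```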
Descriptive and comparative statistics were used to analyse the data. Percentage, mean and standard deviation were used to describe respondents' demographic data and nurses' theoretical knowledge. The significant difference between mean CPR knowledge score and age, gender, academic qualification, area of post-basic specialisation, years of experience and life support courses attended were established using Student's t-test. A χ 2 test and linear regression models (univariate and multivariate) were computed to determine associations between specialist practice nurses' qualification as well as their years of experience and their knowledge of the CPR guidelines.
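To make the group comparison concrete, the following hedged sketch computes a two-sample Student's t statistic in plain Python. The score lists are invented for illustration and are not the study's data; the study itself used STATA for these tests.

```python
# Illustrative only: the score lists below are invented, not study data.
import math

def students_t(a, b):
    """Two-sample Student's t statistic, assuming equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical CPR knowledge scores (%) for two small groups of nurses
icu = [45, 40, 52, 38, 47, 44]
emergency = [48, 50, 46, 49]
t = students_t(icu, emergency)
print(round(t, 2))  # negative here: the invented emergency group scores higher
```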
Results
Almost half of the respondents (46.8%; n=44) were between the ages of 41 and 50 years, 82.98% (n=78) were female and 80.85% (n=76) were specialised in the field of intensive care nursing ( Table 1).
The mean (standard deviation (SD)) CPR knowledge score of the 94 respondents was 46% (12.71). This result indicated that none of the respondents passed the CPR knowledge test with a score above 85%.
The majority of the respondents (80.9%; n=76) had post-basic training in intensive care nursing, 95.7% (n=90) had trained in BLS and 46.8% (n=44) had been trained more than 2 years ago.
We found no significant difference between CPR knowledge score and age, gender, academic qualification, area of post-basic specialisation, years of experience and life support courses attended by the respondents (Table 2). There was little variation in the mean CPR knowledge scores between the different age groups. The male respondents had a slightly higher mean CPR knowledge score (47.9%) than the female respondents (45.77%).
The type of academic qualification (degree or diploma) showed no variation in mean CPR score, as both groups averaged 46%.
Although emergency care nurses had a slightly higher mean CPR score (48%) compared with the intensive care nurses (45%), both groups had suboptimal scores.
There was no significant difference in mean CPR score between respondents with >10 years of nursing experience (58%) and nurses with <10 years of nursing experience (55%). Similarly, no significant difference was found between intensive care nurses' and trauma and emergency nurses' scores in the CPR knowledge test using univariate analysis (p=0.451). This was also true for univariate analysis of scores between specialist nurses with >10 years' experience compared with those with <10 years' experience (p=0.314). Finally, we found no association between post-basic specialisation, years of nursing experience and the score in the CPR knowledge test using a multilinear regression model (Table 3).
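The univariate model referred to above can be sketched as an ordinary least-squares fit of knowledge score on years of experience. The data points below are invented for illustration; this is not the study's model or data.

```python
# Illustrative sketch of a univariate linear model (score vs. experience).
# The data points are invented, not taken from the study.

def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

years = [2, 5, 8, 12, 15, 20]
scores = [44, 45, 47, 46, 48, 46]
a, b = ols(years, scores)
print(f"intercept = {a:.2f}, slope = {b:.2f} points per year")
```

A near-zero slope, as in this invented example, mirrors the study's finding of no association between experience and score.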
Discussion
Respondents in this study represented a range of experience and ages for specialist practice nurses in a SA academic hospital. The majority had completed BLS courses, but not ALS courses. This may account in
some respect for the suboptimal mean CPR knowledge score of 46% (12.71), because 14 of the 20 questions in the questionnaire are related to ALS. However, it is interesting to note that even respondents with ALS training obtained suboptimal scores. The mean CPR score obtained in this study was higher than the 11% that was previously reported in a SA study that sampled all nurses and included questions only on BLS. [12] While our results were similar to those reported for nurses in other developing countries including Botswana, [7] where nurses obtained a mean CPR knowledge score of 55%, and Bahrain, [14] where nurses achieved a mean score of 42%, these studies included all registered nurses. In Greece, [15] nurses specialised in critical care obtained a mean score of 49.5%. While knowledge retention is a major contributor to the inadequate scores, it is not known whether nurses are exposed to opportunities to refresh both knowledge and skills practically or educationally.
The scores for nurses in this study were better than those of doctors at a SA tertiary hospital who scored 41% for adult CPR and 37% for paediatric CPR. [16] Moreover, a study to determine competency of CPR among intermediate qualified SA emergency medical service personnel found that the median knowledge score was 50% and median skill score was 33%, indicating knowledge and skill performance in CPR below standard in critical care health personnel. [17] Responders to patients suffering sudden cardiac arrest include various members of the multidisciplinary team, and therefore all members should be required to stay updated.
Nurses' inadequate theoretical knowledge of CPR has been attributed to ineffective initial or refresher training. Instructor competence, teaching methods, poor recall of knowledge and infrequency of updates are other reasons that influence poor CPR knowledge among nurses. Bukiran et al. [18] and Marzooq and Lyneham [14] attributed suboptimal CPR knowledge to the lack of motivation to update knowledge, which was reported by 1.7% of their respondents, while 14% blamed lack of institutional guidance.
Nearly half of our respondents had expired BLS certificates, while two-thirds had not renewed their ALS certificates for more than 24 months. This illustrates that nurses are not updating or renewing their certification of life support courses. The percentage of respondents who had renewed their BLS certificate in the last 12 months in our study was higher than the 1.3% (n=1) reported by Passali et al. [14] in Greece, where the mean time elapsed since last BLS course training was 15 years. Often, it is an expectation of nurses working in a public facility that training be provided for them by the institution. However, there is an apparent lack of intrinsic motivation for self-development, and responsibility is placed on management of the health institution.
Previous studies have shown that knowledge of the CPR guidelines was significantly associated with academic qualification. [19,14] In fact, a study conducted by Al-Janabi and Al-Ani [20] found that 74.3% of nurses with degree qualifications scored above the cut-off point on CPR knowledge in Baghdad, compared with 22.9% of those with a diploma. We did not find a significant relationship between type of qualification and mean CPR scores. This may be due to the fact that undergraduate and post-basic nursing education do not include BLS, and therefore do not affect this specific knowledge after qualification.
Although there was no significant difference between demographic characteristics and CPR knowledge when comparing the trauma and emergency nurses and intensive care trained nurses, the former appear to have a slightly higher mean score (48%) compared with the latter (45%), and nurses with >10 years of nursing experience scored slightly higher (58%) than nurses with <10 years (55%). These results differ from those of Parajulee and Selvaraj, [21] who reported significant associations between CPR knowledge, years of working experience and work area. Their study included nurses working in a teaching hospital, so their results may not be comparable to our findings.
Study limitations and recommendations
Studies on CPR knowledge and skill retention report a decline in knowledge level over time, with poor performance 3-6 months post training. Hence it is important for nurses to undergo frequent training on CPR knowledge to keep up to date with all updates from the AHA. This requirement can be incentivised through the continuous professional development (CPD) points system, where BLS and ALS training can form part of recognised CPD programmes. The AHA certificate courses including BLS are valid for 2 years, and it is recommended that specialist practice nurses renew their certificates every 2 years.
CPD plans for nurses should include mandatory updates of vital knowledge and skills such as those included in the BLS course.
Although the literature suggests that a relationship exists between knowledge and practice, the focus of this study was limited to the area of knowledge, and therefore may not accurately reflect actual practice or skill in performing CPR. Another limitation of this study is that the data were collected from only one institution. Face and content validity of the instrument were ensured by an expert review panel with the assistance of the original authors; however, the reliability coefficient of the instrument was not reported by the original authors.
Conclusion
Specialist nurses scored poorly on the CPR knowledge test, indicating their suboptimal knowledge. Therefore, we recommend that nurses undergo frequent training on CPR knowledge to keep up to date with all the updates from AHA.
Declaration. This study was conducted in partial fulfilment of a Master of Science degree in Nursing. | 2021-01-06T05:02:13.805Z | 2020-12-01T00:00:00.000 | {
Dominating the Erdős-Moser theorem in reverse mathematics
The Erdős-Moser theorem (EM) states that every infinite tournament has an infinite transitive subtournament. This principle plays an important role in the understanding of the computational strength of Ramsey's theorem for pairs (RT 2 2 ) by providing an alternate proof of RT 2 2 in terms of EM and the ascending descending sequence principle (ADS). In this paper, we study the computational weakness of EM and construct a standard model (ω-model) of simultaneously EM, weak König's lemma and the cohesiveness principle, which is not a model of the atomic model theorem. This separation answers a question of Hirschfeldt, Shore and Slaman, and shows that the weakness of the Erdős-Moser theorem goes beyond the separation of EM from ADS proven by Lerman, Solomon and Towsner.
Introduction
Reverse mathematics is a mathematical program whose goal is to classify theorems in terms of their provability strength. It uses the framework of subsystems of second-order arithmetic, with the base theory RCA 0 , standing for Recursive Comprehension Axiom. RCA 0 is composed of the basic first-order Peano axioms, together with the ∆ 0 1 -comprehension and Σ 0 1 -induction schemes. RCA 0 is usually thought of as capturing computational mathematics. This program led to two important observations: First, most "ordinary" (i.e. non-set-theoretic) theorems require only very weak set existence axioms. Second, many of those theorems are actually equivalent to one of five main subsystems over RCA 0 , known as the "Big Five" [21].
However, Ramsey theory is known to provide a large class of theorems escaping this phenomenon. Indeed, consequences of Ramsey's theorem for pairs (RT 2 2 ) usually belong to their own subsystem. Therefore, they received a lot of attention from the reverse mathematics community [3,13,14,27]. This article focuses on Ramseyan principles below the arithmetic comprehension axiom (ACA). See Soare [29] for a general introduction to computability theory, and Hirschfeldt [11] for a good introduction to the reverse mathematics below ACA.
Cohesiveness
Cohesiveness is a statement playing a central role in the analysis of Ramsey's theorem for pairs [3]. It can be seen as a sequential version of Ramsey's theorem for singletons and admits characterizations in terms of degrees whose jump computes a path through a Π 0,∅ ′ 1 class [16]. The decomposition of RT 2 2 in terms of COH and stable Ramsey's theorem for pairs (SRT 2 2 ) has been reused in the analysis of many consequences of Ramsey's theorem [14]. The link between cohesiveness and SRT 2 2 is still an active research subject [4,8,12,26].
Definition 1.1 (Cohesiveness) An infinite set C is R-cohesive for a sequence of sets R = R 0 , R 1 , . . . if for each i ∈ ω, C ⊆ * R i or C ⊆ * R̄ i . A set C is p-cohesive if it is R-cohesive where R is an enumeration of all primitive recursive sets. COH is the statement "Every uniform sequence of sets R has an R-cohesive set." Jockusch and Stephan [16] studied the degrees of unsolvability of cohesiveness and proved that COH admits a universal instance whose solutions are the p-cohesive sets. They characterized their degrees as those whose jump is PA relative to ∅ ′ . The author extended this analysis to every computable instance of COH and studied their degrees of unsolvability [26]. Cholak, Jockusch and Slaman [3] proved that RT 2 2 is computably equivalent to SRT 2 2 + COH. Mileti [20] and Jockusch and Lempp [unpublished] formalized this equivalence over RCA 0 . Hirschfeldt, Shore and Slaman [15] introduced the atomic model theorem (AMT), asserting that every complete atomic theory has an atomic model. On the lower bound side, they proved that AMT implies the omitting partial type theorem (OPT) over RCA 0 . Hirschfeldt and Greenberg, and independently Day, Dzhafarov and Miller, strengthened this result by proving that AMT implies the finite intersection property (FIP) over RCA 0 (see [11]). The principle FIP was first introduced by Dzhafarov and Mummert [9]. Later, Downey, Diamondstone, Greenberg and Turetsky [7] and Cholak, Downey and Igusa [2] proved that FIP is equivalent to the principle asserting, for every set X, the existence of a 1-generic relative to X. In particular, every model of AMT contains 1-generic reals.
The computable analysis of the atomic model theorem revealed the genericity flavor of the statement. More precisely, the atomic model theorem admits a pure computability-theoretic characterization in terms of hyperimmunity relative to a fixed ∆ 0 2 function.
Definition 1.4 (Escape property) For every ∆ 0 2 function f , there exists a function g such that f (x) < g(x) for infinitely many x.
The escape property is a statement in between hyperimmunity relative to ∅ ′ and hyperimmunity. The atomic model theorem is computably equivalent to the escape property, that is, for every complete atomic theory T , there is a ∆ 0,T 2 function f such that for every function g satisfying the escape property for f , T ⊕ g computes an atomic model of T . Conversely, for every ∆ 0 2 approximation f̃ of a function f , there is an f̃ -computable complete atomic theory such that for every atomic model M, f̃ ⊕ M computes a function satisfying the escape property for f . In particular, the ω-models satisfying AMT are exactly the ones satisfying the escape property. However the formalization of this equivalence requires more than the Σ 0 1 induction scheme. It was proven to hold over RCA 0 + IΣ 0 2 but not RCA 0 + BΣ 0 2 [15,5], where IΣ 0 2 and BΣ 0 2 are respectively the Σ 0 2 induction scheme and the Σ 0 2 bounding scheme. Hirschfeldt, Shore and Slaman [15] asked whether COH implies AMT over RCA 0 . Note that AMT is not computably reducible to COH, since there exists a cohesive set of minimal degree [6], and a computable atomic theory whose computable atomic models bound 1-generic reals [11], but no minimal degree bounds a 1-generic real [32].
In this paper, we answer this question negatively. We shall take advantage of the characterization of AMT by the escape property to create an ω-model M of EM, WKL and COH simultaneously, together with a ∆ 0 2 function f dominating every function in M. Therefore, any ∆ 0 2 approximation f̃ of the function f is a computable instance of the escape property belonging to M, but with no solution in M. The function f witnesses in particular that M ⊭ AMT. Our main theorem is the following: there is an ω-model of EM, WKL and COH which is not a model of AMT. The proof techniques used to prove the main theorem will be introduced progressively by considering first computable non-reducibility, and then generalizing the diagonalization to Turing ideals by using an effective iterative forcing.
Definitions and notation
String, sequence. Fix an integer k ∈ ω. A string (over k) is an ordered tuple of integers a 0 , . . . , a n−1 (such that a i < k for every i < n). The empty string is written ε. A sequence (over k) is an infinite listing of integers a 0 , a 1 , . . . (such that a i < k for every i ∈ ω). Given s ∈ ω, k s is the set of strings of length s over k and k <s is the set of strings of length < s over k. Similarly, k <ω is the set of finite strings over k and k ω is the set of sequences (i.e. infinite strings) over k. Given a string σ ∈ k <ω , we denote by |σ| its length. Given two strings σ, τ ∈ k <ω , σ is a prefix of τ (written σ ⪯ τ ) if there exists a string ρ ∈ k <ω such that σρ = τ . Given a sequence X, we write σ ≺ X if σ = X↾n for some n ∈ ω, where X↾n denotes the restriction of X to its first n elements. A binary string is a string over 2. A real is a sequence over 2. We may identify a real with a set of integers by considering that the real is its characteristic function.
Tree, path. A tree T ⊆ k <ω is a set downward-closed under the prefix relation. A binary tree is a tree T ⊆ 2 <ω . A sequence P ∈ k ω is a path through T if for every σ ≺ P , σ ∈ T . A string σ ∈ k <ω is a stem of a tree T if every τ ∈ T is comparable with σ. Given a tree T and a string σ ∈ T , we denote by T [σ] the subtree {τ ∈ T : τ ⪯ σ ∨ τ ⪰ σ}.
Sets, partitions. Given two sets A and B, we denote by A < B the formula (∀x ∈ A)(∀y ∈ B)[x < y] and by A ⊆ * B the formula (∀ ∞ x ∈ A)[x ∈ B], meaning that A is included in B up to finitely many elements. Given a set X and some integer k, a k-cover of X is a k-tuple A 0 , . . . , A k−1 such that A 0 ∪ · · · ∪ A k−1 = X. We may simply say k-cover when the set X is unambiguous. A k-partition is a k-cover whose sets are pairwise disjoint. A Mathias condition is a pair (F, X) where F is a finite set, X is an infinite set and F < X. A condition (F 1 , X 1 ) extends (F, X) (written (F 1 , X 1 ) ≤ (F, X)) if F ⊆ F 1 , X 1 ⊆ X and F 1 \ F ⊆ X. A set G satisfies a Mathias condition (F, X) if F ⊆ G and G \ F ⊆ X. We refer the reader to Chapter 2 in Hirschfeldt [11] for a gentle introduction to effective forcing.
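The extension and satisfaction relations on Mathias conditions can be sanity-checked on finite data. A minimal Python sketch (reservoirs are finite sets here purely so the sketch runs; in the forcing they are infinite):

```python
def extends(c1, c):
    """Decide whether the Mathias condition c1 = (F1, X1) extends c = (F, X):
    F ⊆ F1, X1 ⊆ X, and every new element of F1 comes from the old reservoir X."""
    (F1, X1), (F, X) = c1, c
    return F <= F1 and X1 <= X and (F1 - F) <= X

def satisfies(G, c):
    """A set G satisfies (F, X) if F ⊆ G and the rest of G comes from X."""
    F, X = c
    return F <= G and (G - F) <= X
```

For instance, `extends(({1, 5}, {7, 9}), ({1}, {5, 7, 9, 11}))` holds because the new element 5 was taken from the old reservoir, while shrinking the reservoir arbitrarily outside X is forbidden.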
The weakness of cohesiveness under computable reducibility
Before proving that COH does not imply AMT over RCA 0 , we illustrate the key features of our construction by showing that AMT does not reduce to COH in one step. This one-step reducibility is known as computable reducibility [8,12,26]. The general construction will consist of iterating this one-step diagonalization to construct a Turing ideal whose functions are dominated by a single ∆ 0 2 function.
Definition 2.1 (Computable reducibility) A principle P is computably reducible to another principle Q (written P ≤ c Q) if every P-instance I computes a Q-instance J such that for every solution X to J, X ⊕ I computes a solution to I.
The remainder of this section is devoted to the proof of the following theorem.
Theorem 2.2 AMT ≰ c COH
In order to prove Theorem 2.2, we need to construct a ∆ 0 2 function f such that for every uniformly computable sequence of sets R = R 0 , R 1 , . . . , there is an R-cohesive set G such that every G-computable function is dominated by f . Thankfully, Jockusch and Stephan [16] proved that for every such sequence of sets R, every p-cohesive set computes an infinite R-cohesive set. The sequence of all primitive recursive sets is therefore called a universal instance. Hence we only need to build a ∆ 0 2 function f and a p-cohesive set G such that every G-computable function is dominated by f to obtain Theorem 2.2.
Given some uniformly computable sequence of sets R = R 0 , R 1 , . . . , the usual construction of an R-cohesive set G is done by a computable Mathias forcing. The forcing conditions are pairs (F, X), where F is a finite set representing the finite approximation of G and X is an infinite, computable reservoir such that max(F ) < min(X). The construction of the R-cohesive set is obtained by building an infinite, decreasing sequence of Mathias conditions, starting with (∅, ω) and interleaving two kinds of steps. Given some condition (F, X), (S1) the extension step consists of taking an element x from X and adding it to F , thereby forming the extension (F ∪ {x}, X \ [0, x]); (S2) the cohesiveness step consists of deciding which one of X ∩ R i and X ∩ R̄ i is infinite, and taking the chosen one as the new reservoir.
The first step ensures that the constructed set G will be infinite, whereas the second step makes the set G R-cohesive. Looking at the effectiveness of the construction, the step (S1) is computable, assuming we are given a Turing index of the set X. The step (S2), on the other hand, requires deciding which one of two computable sets is infinite, knowing that at least one of them is. This decision requires the computational power of a PA degree relative to ∅ ′ (see [3, Lemma 4.2]). Since we want to build a ∆ 0 2 function f dominating every G-computable function, we would like to make the construction of G ∆ 0 2 . Therefore the step (S2) has to be revised.
Effectively constructing a cohesive set
The above construction leads to two observations. First, at any stage of the construction, the reservoir X of the Mathias condition (F, X) has a particular shape. Indeed, after the first application of stage (S2), the set X is, up to finite changes, of the form ω ∩ R 0 or ω ∩ R̄ 0 . After the second application of (S2), it is in one of the following forms: ω ∩ R 0 ∩ R 1 , ω ∩ R 0 ∩ R̄ 1 , ω ∩ R̄ 0 ∩ R 1 , ω ∩ R̄ 0 ∩ R̄ 1 , and so on. More generally, given some string σ ∈ 2 <ω , we can define R σ inductively as follows: First, R ε = ω, and then, if R σ has already been defined for some string σ of length i, R σ0 = R σ ∩ R i and R σ1 = R σ ∩ R̄ i . By the first observation, we can replace Mathias conditions by pairs (F, σ), where F is a finite set and σ ∈ 2 <ω . The pair (F, σ) denotes the Mathias condition (F, R σ ∩ (max(F ), +∞)); call such a pair valid if R σ is infinite. The step (S2) can then be reformulated as choosing, given some valid condition (F, σ), which one of (F, σ0) and (F, σ1) is valid.
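The inductive definition of R σ amounts to a simple membership test: bit 0 at position i keeps R i , bit 1 keeps its complement. A minimal Python sketch, where the predicates R[i] stand in for the uniformly computable sets (names are illustrative):

```python
def in_R_sigma(x, sigma, R):
    """Membership in R_sigma: R_eps = omega, R_{sigma 0} = R_sigma ∩ R_i,
    and R_{sigma 1} = R_sigma minus R_i, where i is the position of the bit.
    `sigma` is a string of '0'/'1' bits; R is a list of predicates."""
    for i, bit in enumerate(sigma):
        in_Ri = R[i](x)
        if bit == '0' and not in_Ri:   # sigma keeps R_i, but x is outside R_i
            return False
        if bit == '1' and in_Ri:       # sigma keeps the complement of R_i
            return False
    return True
```

For instance, with R 0 the even numbers and R 1 the multiples of 3, the string "01" selects the even numbers that are not multiples of 3.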
Second, we do not actually need to decide which one of R σ0 and R σ1 is infinite assuming that R σ is infinite. Our goal is to dominate every G-computable function with a ∆ 0 2 function f . Therefore, given some G-computable function g, it is sufficient to find a finite set S of candidate values for g(x) and make f (x) be greater than the maximum of S. Instead of choosing which one of R σ0 and R σ1 is infinite, we will explore both cases in parallel. The step (S2) will split some condition (F, σ) into two conditions (F, σ0) and (F, σ1). Our new forcing conditions are therefore tuples (F σ : σ ∈ 2 n ) which should be thought of as 2 n parallel Mathias conditions (F σ , σ) for each σ ∈ 2 n . Note that (F σ , σ) may not denote a valid Mathias condition in general since R σ may be finite. Therefore, the step (S1) becomes ∆ 0 2 , since we first have to check whether R σ is non-empty before picking an element in R σ . The whole construction is ∆ 0 2 and yields a ∆ 0 2 infinite binary tree T . In particular, any degree PA relative to ∅ ′ bounds an infinite path through T and therefore bounds an R-cohesive set. However, the degree of the set G is not relevant to our argument. We only care about the effectiveness of the tree T .
Dominating the functions computed by a cohesive set
We have seen in the previous section how to make the construction of a cohesive set more effective by postponing the choices between forcing G ⊆ * R i and G ⊆ * R̄ i to the end of the construction. We now show how to dominate every G-computable function for every infinite path G through the ∆ 0 2 tree constructed in the previous section. To do this, we will interleave a third step deciding whether Φ G e (n) halts, and if so, collecting the candidate values of Φ G e (n). Given some Mathias precondition (F, X) (a precondition is a condition where we do not assume that the reservoir is infinite) and some e, x ∈ ω, we can ∆ 0 2 -decide whether there is some set E ⊆ X such that Φ F ∪E e (x) ↓. If this is the case, then we can effectively find such a finite set E ⊆ X and compute the value Φ F ∪E e (x). If this is not the case, then for every infinite set G satisfying the condition (F, X), the function Φ G e will not be defined on input x. In this case, our goal is vacuously satisfied since Φ G e will not be a function and therefore we do not need to dominate Φ G e . Let us go back to the previous construction. After some stage, we have constructed a condition (F σ : σ ∈ 2 n ) inducing a finite tree of depth n. The step (S3) acts as follows for some x ∈ ω: (S3) Let S = {0}. For each σ ∈ 2 n and each e ≤ x, decide whether there is some finite set E ⊆ R σ ∩ (max(F σ ), +∞) such that Φ F σ ∪E e (x) ↓. If this is the case, add the value of Φ F σ ∪E e (x) to S and set F̃ σ = F σ ∪ E, otherwise set F̃ σ = F σ . Finally, set f (x) = max(S) + 1 and take (F̃ σ : σ ∈ 2 n ) as the next condition. Note that the step (S3) is ∆ 0 2 -computable uniformly in the condition (F σ : σ ∈ 2 n ). The whole construction therefore remains ∆ 0 2 and so does the function f . Moreover, given some G-computable function g, there is some Turing index e such that Φ G e = g. For each x ≥ e, the step (S3) is applied at a finite stage and decides whether Φ G e (x) halts or not for every set satisfying one of the leaves of the finite tree.
In particular, this is the case for the set G, and if Φ G e (x) halts, its value belongs to S, hence f (x) > Φ G e (x). Therefore f dominates the function g.
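The bookkeeping in step (S3) can be sketched as follows, with total Python functions returning None standing in for the ∆ 0 2 halting decisions (a toy model of the value-collection step, not an implementation of Turing functionals):

```python
def step_S3(x, leaves, phis):
    """Collect every candidate value Phi_e(F_sigma, x) over the leaves F_sigma
    of the current finite tree and all e <= x, then return f(x) = max(S) + 1,
    which strictly exceeds every collected candidate value."""
    S = {0}
    for F in leaves:
        for e in range(min(x + 1, len(phis))):
            v = phis[e](F, x)
            if v is not None:  # "Phi_e^{F ∪ E}(x) halts": record its value in S
                S.add(v)
    return max(S) + 1
```

Since every value a G-computable function can take at x appears among the candidates of some leaf, the returned f(x) dominates it.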
The formal construction
Let R = R 0 , R 1 , . . . be the sequence of all primitive recursive sets. We define a ∆ 0 2 decreasing sequence of conditions (∅, ε) ≥ c 0 ≥ c 1 ≥ . . . , with c s = (F s σ : σ ∈ 2 s ), such that for each s ∈ ω: (i) |F s σ | ≥ s for every σ ∈ 2 s such that R σ is infinite; (ii) for every e ≤ s and every σ ∈ 2 s , either Φ F s σ e (s) ↓ or Φ G e (s) ↑ for every set G satisfying (F s σ , R σ ). Let P be a path through the tree T = {σ ∈ 2 <ω : R σ is infinite} and let G = ∪ s F s P ↾s . By (i), for each s ∈ ω, |F s P ↾s | ≥ s since R P ↾s is infinite. Therefore the set G is infinite. Moreover, for each s ∈ ω, the set G satisfies the condition (F s+1 P ↾s+1 , R P ↾s+1 ). Let f be the function produced by the successive applications of step (S3), that is, f (s) is one more than the maximum of the values Φ F s σ e (s) over all σ ∈ 2 s and e ≤ s for which Φ F s σ e (s) ↓. The function f is ∆ 0 2 . We claim that it dominates every G-computable function. Fix some e such that Φ G e is total. For every s ≥ e, let σ = P ↾ s. By (ii) and since Φ G e (s) ↓, we have Φ G e (s) = Φ F s σ e (s) < f (s). Therefore f dominates the function Φ G e . This completes the proof of Theorem 2.2.
The weakness of EM under computable reducibility
We now strengthen the analysis of the previous section by proving that the atomic model theorem is not computably reducible to the Erdős-Moser theorem. Theorem 2.2 is an immediate consequence of this result since [AMT ∨ COH] ≤ c EM (see [25]). After this section, we will be ready to iterate the construction in order to build an ω-model of EM ∧ COH which is not a model of AMT.
Theorem 3.1 AMT ≰ c EM
Before proving Theorem 3.1, we start with an analysis of the combinatorics of the Erdős-Moser theorem. Just as we did for cohesiveness, we will show how to build solutions to EM through ∆ 0 2 constructions, postponing the Π 0 2 choices to the end.
The combinatorics of the Erdős-Moser theorem
The standard way of building an infinite object by forcing consists of defining an increasing sequence of finite approximations, and taking the union of them. Unlike COH, where every finite set can be extended to an infinite cohesive set, some finite transitive subtournaments may not be extensible to an infinite one. We therefore need to maintain some extra properties which will guarantee that the finite approximations are extendible. The nature of these properties constitutes the core of the combinatorics of EM.
Lerman, Solomon and Towsner [19] carried out an analysis of the Erdős-Moser theorem. They showed in particular that it suffices to ensure that the finite transitive subtournament F has infinitely many one-point extensions, that is, infinitely many elements x such that F ∪ {x} is transitive, to extend F to an infinite transitive subtournament (see [19, Lemma 3.4]). This property is sufficient to add elements one by one to the finite approximation. However, when adding elements in blocks, we shall maintain a stronger invariant. We will require that the reservoir is included in a minimal interval of the finite approximation F . In this section, we reintroduce the terminology of Lerman, Solomon and Towsner [19] and give a presentation of the combinatorics of the Erdős-Moser theorem motivated by its computational analysis. Definition 3.2 (Minimal interval) Let R be an infinite tournament and a, b ∈ R be such that R(a, b) holds. The interval (a, b) is the set of all x ∈ R such that R(a, x) and R(x, b) hold. Let F ⊆ R be a finite transitive subtournament of R. For a, b ∈ F such that R(a, b) holds, we say that (a, b) is a minimal interval of F if there is no c ∈ F ∩ (a, b), i.e., no c ∈ F such that R(a, c) and R(c, b) both hold.
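Definition 3.2 translates directly into two predicates; the following Python sketch works over finite data, with the tournament R given as a Boolean function (a toy illustration, not part of the formal development):

```python
def interval(a, b, R, domain):
    """All x in `domain` lying in the interval (a, b) of tournament R,
    i.e. with both R(a, x) and R(x, b)."""
    return {x for x in domain if x not in (a, b) and R(a, x) and R(x, b)}

def is_minimal_interval(a, b, R, F):
    """(a, b) is a minimal interval of the finite set F if R(a, b) holds
    and no c in F lies strictly inside the interval."""
    return R(a, b) and not any(R(a, c) and R(c, b) for c in F if c not in (a, b))
```

With the transitive tournament R(u, v) defined as u < v, the interval (1, 4) inside {0, ..., 5} is {2, 3}, and (1, 4) is a minimal interval of F = {1, 4} but not of F = {1, 2, 4}.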
Fix a computable tournament R, and consider a pair (F, X) where (i) F is a finite R-transitive set representing the finite approximation of the infinite R-transitive subtournament we want to construct (ii) X is an infinite set disjoint from F , included in a minimal interval of F and such that F ∪ {x} is R-transitive for every x ∈ X. In other words, X is an infinite set of one-point extensions. Such a set X represents the reservoir, that is, a set of candidate elements we may add to F later on. The infinite set X ensures extensibility of the finite set F into an infinite R-transitive subtournament. Indeed, by applying the Erdős-Moser theorem to R over the domain X, there exists an infinite R-transitive subtournament H ⊆ X. One easily checks that F ∪ H is R-transitive. The pair (F, X) is called an Erdős-Moser condition in [23]. A set G satisfies an EM condition (F, X) if it is R-transitive and satisfies the Mathias condition (F, X). In order to simplify notation, given a tournament R and two sets E and F , we denote by E → R F the formula (∀x ∈ E)(∀y ∈ F )R(x, y).
Suppose now that we want to add a finite number of elements of X into F to obtain a finite R-transitive set F̃ ⊇ F , and find an infinite subset X̃ ⊆ X such that (F̃ , X̃) has the above mentioned properties. We can do this in a few steps:
1. Choose a finite (not necessarily R-transitive) set E ⊂ X.
2. Any element x ∈ X \ E induces a 2-partition E 0 , E 1 of E by setting E 0 = {y ∈ E : R(y, x)} and E 1 = {y ∈ E : R(x, y)}. Consider the coloring f which associates to any element of X \ E the corresponding 2-partition E 0 , E 1 of E.
3. As E is finite, there exist finitely many 2-partitions of E, so f colors each element of X \ E into finitely many colors. By Ramsey's theorem for singletons applied to f , there exists a 2-partition E 0 , E 1 of E together with an infinite subset X̃ ⊆ X \ E such that for every x ∈ X̃, f (x) = E 0 , E 1 . By definition of f and E i , E 0 → R X̃ → R E 1 .
4. Take any R-transitive subset F 1 ⊆ E i for some i < 2 and set F̃ = F ∪ F 1 .
The pair (F̃ , X̃) satisfies the required properties (see [23, Lemma 5.9] for a proof). From a computational point of view, if we start with a computable condition (F, X), that is, where X is a computable set, we end up with a computable extension (F̃ , X̃). Remember that our goal is to define a ∆ 0 2 function f which will dominate every G-computable function for some solution G to R. For this, we need to be able to ∅ ′ -decide whether Φ G e (n) ↓ or Φ G e (n) ↑ for every solution G to R satisfying some condition (F, X). More generally, given some Σ 0 1 formula ϕ, we focus on the computational power required to decide a question of the form Q1: Is there an R-transitive extension F̃ of F in X such that ϕ(F̃ ) holds?
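On finite data, steps 1-3 of this procedure can be sketched by coloring each point of X \ E with the 2-partition of E it induces and keeping the largest color class, a finite stand-in for the pigeonhole argument (names are illustrative):

```python
from collections import defaultdict

def uniform_subreservoir(E, X, R):
    """Color each x in X \\ E by the induced 2-partition (E0, E1) of E,
    with E0 = {y in E : R(y, x)} and E1 = E \\ E0, and return the partition
    realized by the most points together with that set of points Xt,
    so that E0 ->_R Xt ->_R E1."""
    classes = defaultdict(set)
    for x in X - E:
        E0 = frozenset(y for y in E if R(y, x))  # arrows y -> x in R
        classes[E0].add(x)
    E0, Xt = max(classes.items(), key=lambda kv: len(kv[1]))
    return set(E0), E - E0, Xt
```

In the actual construction the infinite color class is chosen by Ramsey's theorem for singletons; here the largest finite class merely plays its role.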
Trying to apply naively the algorithm above requires a lot of computational power. In particular, step 3 requires choosing a true formula among finitely many Π 0,X 2 formulas. Such a step needs the power of a PA degree relative to the jump of X. We shall apply the same trick as for cohesiveness, consisting in not trying to choose a true Π 0,X 2 formula, but instead parallelizing the construction. Given a finite set E ⊂ X, instead of finding an infinite subset X̃ ⊆ X \ E whose members all induce the same 2-partition of E, we will construct as many extensions of (F, X) as there are 2-partitions of E. The question now becomes Q2: Is there a finite set E ⊆ X such that for every 2-partition E 0 , E 1 of E, there exists an R-transitive subset F 1 ⊆ E i for some i < 2 such that ϕ(F ∪ F 1 ) holds? This question is Σ 0,X 1 , which is good enough for our purposes. If the answer is positive, we will try the witness F 1 associated to each 2-partition of E in parallel. Note that some 2-partitions may have a finite corresponding subset of X, but this is not a problem since there is at least one good 2-partition such that the corresponding set is infinite. The whole construction yields again a tree of pairs (F, X).
If the answer is negative, we want to ensure that ϕ(F̃ ) will not hold at any further stage of the construction. For each n ∈ ω, let H n be the set of the n first elements of X. Because the answer is negative, for each n ∈ ω, there exists a 2-partition E 0 , E 1 of H n such that for every R-transitive subset F 1 ⊆ E i for any i < 2, ϕ(F ∪ F 1 ) does not hold. Call such a 2-partition an avoiding partition of H n . Note that if E 0 , E 1 is an avoiding partition of H n+1 , then E 0 ↾n, E 1 ↾n is an avoiding partition of H n . So the set of avoiding 2-partitions of some H n forms an infinite tree T . Moreover, the predicate " E 0 , E 1 is an avoiding partition of H n " is ∆ 0,H n 1 so the tree T is ∆ 0,X 1 . The collection of the infinite paths through T forms a non-empty Π 0,X 1 class C defined as the collection of 2-partitions Z 0 ∪ Z 1 = X such that for every i < 2 and every R-transitive subset F 1 ⊆ Z i , ϕ(F ∪ F 1 ) does not hold. The natural next step would be to apply weak König's lemma to obtain a 2-partition of X such that for every finite R-transitive subset F 1 of any of its parts, ϕ(F ∪ F 1 ) does not hold. By the low basis theorem, we could take the 2-partition to be low over X and the whole construction would remain ∆ 0 2 . However, when iterating the construction, we will be given only finite pieces of tournaments since the tournament may depend on an oracle being constructed at a previous iteration. In this setting, it will be impossible to compute a member of the Π 0,X 1 class C of 2-partitions, since we will have access to only a finite piece of the corresponding tree T . In order to prepare progressively for the iterated forcing, we will not apply WKL and will work with Π 0 1 classes of 2-partitions. Therefore, if the answer is negative, we duplicate the finite R-transitive set F into two sets F 0 = F 1 = F , and commit F i to take from now on its next elements from X i for some 2-partition X 0 ∪ X 1 = X belonging to the Π 0 1 class C of 2-partitions witnessing the negative answer.
Iterating the process by asking several questions leads to tuples (F 0 , . . . , F k−1 , C) where F i is a finite R-transitive set taking its elements from the ith part of the class C of k-partitions. This notion of forcing will be defined formally in a later section.
Enumerating the computable infinite tournaments
Proving that some principle P does not computably reduce to Q requires creating a P-instance X such that every X-computable Q-instance has a solution Y such that Y ⊕ X does not compute a solution to X. In the case of AMT ≰ c COH, we have been able to restrict ourselves to only one instance of COH, since Jockusch and Stephan [16] showed it admits a universal instance. It is currently unknown whether the Erdős-Moser theorem admits a universal instance, that is, a computable infinite tournament such that every infinite transitive subtournament H of it computes, for every computable infinite tournament T , an infinite transitive T -subtournament. See [23] for an extensive study of the existence of universal instances for principles in reverse mathematics.
Since we do not know whether EM admits a universal instance, we will need to diagonalize against the solutions to every computable EM-instance. In fact, we will prove a stronger result. We will construct a ∆ 0 2 function f and an infinite set G which is eventually transitive simultaneously for every computable infinite tournament, and such that f dominates every G-computable function. There exists no computable sequence of sets containing all computable sets. Therefore it is not possible to computably enumerate every infinite computable tournament. However, one can define an infinite, computable, binary tree such that every infinite path computes such a sequence. See the notion of sub-uniformity defined by Mileti in [20] for details. By the low basis theorem, there exists a low set bounding a sequence containing, among others, every infinite computable tournament. As we shall prove below, for every set C and every uniformly C-computable sequence of infinite tournaments R, there exists a set G together with a ∆ 0,C 2 function f such that G is eventually R-transitive for every tournament R of the sequence and, whenever Φ G⊕C e is total, it is dominated by f for every e ∈ ω.
Thus it suffices to choose C to be our low set and R to be a uniformly C-computable sequence of infinite tournaments containing every computable tournament to deduce the existence of a set G together with a ∆ 0 2 function f such that (i) G is eventually R-transitive for every infinite, computable tournament R (ii) If Φ G⊕C e is total, then it is dominated by f for every e ∈ ω By the computable equivalence between AMT and the escape property, there exists a computable atomic theory T such that every atomic model computes a function g not dominated by f . If AMT ≤ c EM, then there exists an infinite, computable tournament R such that every infinite R-transitive subtournament computes a model of T , hence computes a function g not dominated by f . As the set G is, up to finite changes, an infinite R-transitive subtournament, G computes such a function g, contradicting our hypothesis. Therefore AMT ≰ c EM.
Cover classes
In this part, we introduce some terminology about classes of k-covers. Recall that a k-cover of some set X is a k-tuple A 0 , . . . , A k−1 such that A 0 ∪ · · · ∪ A k−1 = X. In particular, the sets are not required to be pairwise disjoint.
Cover class. We identify a k-cover Z 0 ∪ · · · ∪ Z k−1 of some set X with the k-fold join of its parts Z = ⊕ i<k Z i , and refer to this as a code for the cover. A k-cover class of some set X is a tuple (k, X, C) where C is a collection of codes of k-covers of X. We will be interested in Π 0 1 k-cover classes. A part of a k-cover class (k, X, C) is a number ν < k. Informally, a part ν represents the collection of all Z ν , where Z 0 ⊕ · · · ⊕ Z k−1 ∈ C. For the simplicity of notation, we may use the same letter C to denote both a k-cover class (k, X, C) and the actual collection of k-covers C. We then write dom(C) for X and parts(C) for k.
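The identification of a k-cover with its k-fold join uses the standard effective coding n = k·x + i for x ∈ Z i , which we assume here; a minimal sketch on finite sets:

```python
def join(parts):
    """Code the k-cover Z_0, ..., Z_{k-1} as the single set {k*x + i : x in Z_i}."""
    k = len(parts)
    return {k * x + i for i, Zi in enumerate(parts) for x in Zi}

def unjoin(Z, k):
    """Recover the k parts of a coded k-cover: n = k*x + i puts x into part i."""
    parts = [set() for _ in range(k)]
    for n in Z:
        parts[n % k].add(n // k)
    return parts
```

The two maps are mutually inverse, so a Π 0 1 class of codes and a Π 0 1 class of k-covers carry the same information.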
Restriction of a cover. Given some k-cover Z = Z 0 ⊕ · · · ⊕ Z k−1 of some set X and given some set Y ⊆ X, we write Z ↾ Y for the k-cover (Z 0 ∩ Y ) ⊕ · · · ⊕ (Z k−1 ∩ Y ) of Y . Similarly, given some cover class (k, X, C) and some set Y ⊆ X, we denote by C ↾ Y the cover class (k, Y, D) where D = {Z ↾ Y : Z ∈ C}. Given some part ν of C and some set E, we write C [ν,E] for the cover class (k, X, {Z 0 ⊕ · · · ⊕ Z k−1 ∈ C : E ⊆ Z ν }).
Refinement. The collection of cover classes can be given a natural partial order as follows. Given a k-cover Z of some set X, an m-cover V of some Y ⊆ X and a function f : m → k, we say that V f -refines Z if V ν ⊆ Z f (ν) for each ν < m. Given two cover classes (k, X, C) and (m, Y, D) and some function f : m → k, we say that D f -refines C if for every V ∈ D, there is some Z ∈ C such that V f -refines Z. In this case, we say that part ν of D refines part f (ν) of C.
Acceptable part. We say that part ν of C is acceptable if there exists some Z 0 ⊕ · · · ⊕ Z k−1 ∈ C such that Z ν is infinite. Part ν of C is empty if for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C, Z ν = ∅. Note that if C is non-empty and dom(C) is infinite, then C has at least one acceptable part. Moreover, if D ≤ f C and part ν of D is acceptable, then so is part f (ν) of C. The converse does not hold in general.
The forcing notion
We now get into the core of our forcing argument by defining the forcing notion which will be used to build an infinite set eventually transitive for every infinite computable tournament.
Fix a set C and a uniformly C-computable sequence of infinite tournaments R 0 , R 1 , . . . We construct our set G by a forcing whose conditions are tuples (α, F , C), where α ∈ t <ω for some t ∈ ω, C is a non-empty Π 0,C 1 k-cover class of [t, +∞) for some k ∈ ω, and F = (F ν : ν < k) is a sequence of finite sets such that (F ν \ [0, α(i)), Z ν ) is an Erdős-Moser condition for R i for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C, every i < |α| and each ν < k. A condition (β, E, D) extends (α, F , C) (written (β, E, D) ≤ (α, F , C)) if β ⪰ α and there exists a function f : parts(D) → parts(C) such that the following holds: (i) E ν \ F f (ν) ⊆ dom(C) for each ν < parts(D); (ii) D f -refines C [f (ν),E ν \F f (ν) ] for each ν < parts(D). One may think of a condition (α, F , C) with, say, parts(C) = k, as k parallel Mathias conditions which are, up to finite changes, Erdős-Moser conditions simultaneously for the tournaments R 0 , . . . , R |α|−1 . Given some i < |α|, the value α(i) indicates at which point the sets F start being R i -transitive. More precisely, for every part ν < k and every k-cover Z 0 ⊕ · · · ⊕ Z k−1 ∈ C, (F ν \ [0, α(i)), Z ν ) is an Erdős-Moser condition for R i for each i < |α|. Indeed, because of clause (i), the elements E ν \ F f (ν) added to F f (ν) come from dom(C) and because of clause (ii), these elements must come from the part f (ν) of the class C, otherwise C [f (ν),E ν \F f (ν) ] would be empty and so would be D.
Of course, there may be some parts ν of C which are non-acceptable, that is, such that Z ν is finite for every k-cover Z 0 ⊕ · · · ⊕ Z k−1 ∈ C. However, by the infinite pigeonhole principle, Z ν must be infinite for at least one ν < k. Choosing α to be in t <ω instead of ω <ω ensures that all elements added to F will have to be R i -transitive simultaneously for each i < |α|, as the elements are taken from dom(C) and therefore are greater than the threshold α(i) for each i < |α|. A part of a condition c = (α, F , C) is a pair ⟨c, ν⟩, where ν < parts(C). For the simplicity of notation, we may identify a part ⟨c, ν⟩ of a condition with the part ν of the corresponding cover class C. It must however be clear that a part depends on the condition c.
We start with a few basic lemmas reflecting the combinatorics described in subsection 3.1. They are directly adapted from the basic properties of an Erdős-Moser condition proven in [23]. The first lemma states that each element of the finite transitive tournaments F behaves uniformly with respect to the elements of the reservoir, that is, it is beaten by every element of the reservoir or beats all of them.
Lemma 3.3 For every condition
Here, u and v may be respectively −∞ and +∞. The first case follows from the definition of an interval; the latter case holds by symmetry. The second lemma is the core of the combinatorics of the Erdős-Moser theorem. It provides sufficient properties to obtain a valid extension of a condition. Properties (i) and (ii) are simply the definition of an extension. Properties (iii) and (iv) help to propagate properties (b) and (c) from a condition to its extension. In practice, properties (iii) and (iv) are simpler to check than (b) and (c), as the former match exactly the way we add elements to our finite tournaments F . Therefore, ensuring that these properties are satisfied usually consists of checking that we followed the standard process of adding elements to F .

Lemma 3.4 Fix a condition c = (α, F , C) where C is a k-cover class of [t, +∞). Let E 0 , . . . , E m−1 be finite sets, D be a non-empty Π 0,C 1 m-cover class of [t ′ , +∞) for some t ′ ≥ t, and f : m → k be a function satisfying properties (iii) and (iv) for each i < |α| and ν < m. If properties (i) and (ii) of an extension are satisfied for d = (α, H , D) with witness f , then d is a valid condition extending c.
Proof. All we need to do is check properties (b) and (c) for d in the definition of a condition. We prove property (b). Fix an i < |α|, some part ν of D, and an x ∈ V ν for some V 0 ⊕ · · · ⊕ V m−1 ∈ D.
In order to prove that Here again, u and v may be respectively −∞ and +∞. By assumption, either it has a minimal and a maximal element, say x and y.
To prove minimality for the first case, assume that some w is in the interval (y, v). Then w ∉ F f (ν) ∖ [0, α(i)) by minimality of the interval (u, v) with respect to F f (ν) ∖ [0, α(i)), and w ∉ E ν by maximality of y. Minimality for the second case holds by symmetry. Now that we have settled the necessary technical lemmas, we start proving lemmas which will be directly involved in the construction of the transitive subtournament. The following simple progress lemma states that we can always find an extension of a condition in which we increase both the finite approximations corresponding to the acceptable parts and the number of tournaments for which we are simultaneously transitive. Moreover, this extension can be found uniformly.
Lemma 3.5 (Progress) For every condition c = (α, F , C) and every s ∈ ω, there exists an extension d = (β, E, D) such that |β| ≥ s and |E ν | ≥ s for every acceptable part ν of D. Furthermore, such an extension can be found C ′ -effectively, uniformly in c and s.
Proof. Fix a condition c = (α, F , C). First note that for every β ⪰ α such that β(i) > max(F ν : ν < parts(C)) whenever |α| ≤ i < |β|, (β, F , C) is a condition extending c. Therefore it suffices to prove that for every such condition c and every part ν of C, we can C ′ -effectively find a condition d = (α, H , D) refining c with witness f : parts(D) → parts(C) such that f forks only parts refining part ν of C, and either every such part µ of D is empty or |H µ | > |F ν |. Iterating the process finitely many times enables us to conclude.
Fix some part ν of C and let D be the collection of Z 0 ⊕ · · · ⊕ Z k−1 ∈ C such that Z ν = ∅. We can C ′ -decide whether or not D is empty. If D is non-empty, then (α, F , D) is a valid extension of c with the identity function as witness and such that part ν of D is empty. If D is empty, we can C ′ -computably find some Z 0 ⊕ · · · ⊕ Z k−1 ∈ C and pick some x ∈ Z ν . Consider the C-computable 2 |α| -partition (X ρ : ρ ∈ 2 |α| ) of ω in which the class of an element is determined by its behavior with respect to x in each tournament R i for i < |α|. Let D̃ be the cover class refining C [ν,x] such that part ν of D̃ has 2 |α| forks induced by the 2 |α| -partition X. Define H by H µ = F µ if µ refines a part different from ν, and H µ = F ν ∪ {x} if µ refines part ν of C. The forking according to X ensures that property (iv) of Lemma 3.4 holds. By Lemma 3.4, d = (α, H , D̃) is a valid extension of c.
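The forking step can be imitated on a small computational sketch (a toy model of ours, not the paper's actual objects): each element y of a finite reservoir segment is assigned the vector ρ ∈ 2 |α| recording, for each tournament R i , whether y beats x. Elements sharing a vector behave uniformly towards x, which is the uniformity that property (iv) of Lemma 3.4 requires.

```python
def fork_partition(tournaments, x, reservoir):
    """Split the reservoir into (at most) 2^{|tournaments|} classes X_rho:
    rho(i) = 1 iff y beats x in the i-th tournament.  Elements in a common
    class behave uniformly with respect to x in every tournament."""
    classes = {}
    for y in reservoir:
        rho = tuple(int(beats(y, x)) for beats in tournaments)
        classes.setdefault(rho, []).append(y)
    return classes

# Two toy tournaments on the integers: for a != b, beats(a, b) holds for
# exactly one of (a, b) and (b, a).
r0 = lambda a, b: a > b
r1 = lambda a, b: (a > b) ^ ((a + b) % 2 == 0)

classes = fork_partition([r0, r1], 0, range(1, 9))
```

Here every y > 0 beats 0 under r0, so only the r1-behavior varies: the reservoir splits into the odd and the even elements.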
The strategy
Thanks to Lemma 3.5, we can define an infinite, C ′ -computable decreasing sequence of conditions. As already noticed, if some acceptable part µ of C s+1 refines some part ν of C s , part ν of C s is also acceptable. Therefore, the set of acceptable parts forms an infinite, finitely branching C ′ -computable tree T . Let P be any infinite path through T . The set H(P ) = ⋃ s F s,P (s) is then infinite and eventually transitive for every tournament in R. Our goal is to build a C ′ -computable function dominating every function computed by H(P ) for at least one path P through T . However, it requires too much computational power to distinguish acceptable parts from non-acceptable ones, and even some acceptable part may have only finitely many extensions. Therefore, we will dominate the functions computed by H(P ) for every path P through T .
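The combinatorial core of this step, a finitely branching tree of parts and a path through it, can be mimicked on a finite approximation (a toy sketch; `levels` and `parent` are hypothetical encodings of ours, not the paper's notation): at each level we keep only the parts that still have a descendant at the last known stage, in the spirit of König's lemma.

```python
def extensible_path(levels, parent):
    """levels[s] lists the parts at stage s; parent[(s, mu)] is the part of
    stage s-1 refined by mu.  Returns a path that only goes through parts
    still alive at the final stage (a finite stand-in for Koenig's lemma)."""
    n = len(levels)
    alive = [set(levels[n - 1])]
    for s in range(n - 2, -1, -1):
        alive.insert(0, {parent[(s + 1, mu)] for mu in alive[0]} & set(levels[s]))
    path = []
    for s in range(n):
        path.append(next(nu for nu in levels[s] if nu in alive[s]
                         and (s == 0 or parent[(s, nu)] == path[-1])))
    return path

levels = [["a"], ["b", "c"], ["d"]]
parent = {(1, "b"): "a", (1, "c"): "a", (2, "d"): "c"}
```

In the actual construction no oracle can compute which parts stay alive forever, which is exactly why the dominating function below hedges over every path rather than picking one.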
At a finite stage, a condition contains finitely many parts, each one representing the construction of a transitive subtournament. As in the construction of a cohesive set, it suffices to check one by one whether there exists an extension of our subtournaments which will make a given functional terminate on a given input. In the next subsection, we develop the framework necessary to decide such a termination at a finite stage.
Forcing relation
As a condition c = (α, F , C) corresponds to the construction of multiple subtournaments F 0 , F 1 , . . . at the same time, the forcing relation will depend on which subtournament we are considering. In other words, the forcing relation depends on the part ν of C we focus on. Definition 3.6 Fix a condition c = (α, F , C), a part ν of C and two integers e, x.
The forcing relations defined above satisfy the usual forcing properties. In particular, let c 0 ≥ c 1 ≥ . . . be an infinite decreasing sequence of conditions. This sequence induces an infinite, finitely branching tree of acceptable parts T . Let P be an infinite path through T . Another important feature of this forcing relation is that we can decide, C ′ -uniformly in its parameters, whether there is an extension forcing Φ G⊕C e (x) to halt or to diverge. Deciding this relation with little computational power is useful because our C ′ -computable dominating function will need to decide the termination of Γ G⊕C (x) to check whether it has to dominate the value output by Γ G⊕C (x).
Furthermore, such an extension can be found C ′ -effectively, uniformly in c, e and x.
Proof. Given a condition c and two integers e, x ∈ ω, let I e,x (c) be the set of parts ν of c such that c ⊮ ν Φ G⊕C e (x) ↓ and c ⊮ ν Φ G⊕C e (x) ↑. Note that I e,x (c) is C ′ -computable uniformly in c, e and x. It suffices to prove that given such a condition c and a part ν ∈ I e,x (c), one can C ′ -effectively find an extension d with witness f such that f (I e,x (d)) ⊆ I e,x (c) ∖ {ν}. Iterating the operation enables us to conclude.
Fix a condition c = (α, F , C) where C is a k-cover class, and fix some part ν ∈ I e,x (c). The strategy is the following: either we can fork part ν of C into enough parts so that we force Φ G⊕C e (x) to diverge on each forked part, or we can find an extension forcing Φ G⊕C e (x) to converge on part ν without forking. Hence, we ask the following question.
Q2: Is it true that for every k-cover Z 0 ⊕ · · · ⊕ Z k−1 ∈ C and every 2 |α| -partition ⋃ ρ∈2 |α| X ρ = Z ν , there is some ρ ∈ 2 |α| and some finite set F 1 ⊆ X ρ which is R i -transitive for each i < |α| simultaneously, and such that Φ (F ν ∪F 1 )⊕C e (x) ↓? If the answer is no, then by forking the part ν of C into 2 |α| parts, we will be able to force Φ G⊕C e (x) to diverge on each forked part. Suppose now that the answer is yes. By compactness, we can C ′ -effectively find a finite set E ⊆ Z ν for some Z 0 ⊕ · · · ⊕ Z k−1 ∈ C such that for every 2 |α| -partition (E ρ : ρ ∈ 2 |α| ) of E, there is some ρ ∈ 2 |α| and some set F 1 ⊆ E ρ which is R i -transitive simultaneously for each i < |α| and such that Φ (F ν ∪F 1 )⊕C e (x) ↓. There are finitely many 2 |α| -partitions of E. Let n be the number of such partitions. These partitions induce a finite C-computable n-partition of dom(C), with one class for each 2 |α| -partition (E ρ : ρ ∈ 2 |α| ) of E. Let D̃ be the Π 0,C 1 (k + n − 1)-cover class refining C [ν,E] and such that part ν of C [ν,E] is refined according to the above partition of dom(C). Let f : k + n − 1 → k be the refining function witnessing it. Define H as follows. For every part µ of D̃ refining part ν of C [ν,E] , by definition of D̃, there is some 2 |α| -partition (E ρ : ρ ∈ 2 |α| ) of E such that for every
Construction
We are now ready to construct our infinite transitive subtournament H(P ) together with a C ′ -computable function f dominating every H(P ) ⊕ C-computable function. Thanks to Lemma 3.5 and Lemma 3.7, we can C ′ -compute an infinite descending sequence of conditions (ǫ, ∅, 1 <ω ) ≥ c 0 ≥ c 1 ≥ . . . , where c s = (α s , F s , C s ), satisfying the following three properties at each stage s ∈ ω. Property 1 ensures that the resulting set will be eventually transitive for every tournament in R. Property 2 makes the subtournaments infinite. Last, property 3 enables us to C ′ -decide at a finite stage whether a functional terminates on a given input, with the transitive subtournament as an oracle. Define the C ′ -computable function f : ω → ω as follows: On input x, the function f looks at all stages s such that s = ⟨e, x⟩ for some e ≤ x. For each such stage s and each part ν of C s , the function f decides whether Φ G⊕C e (x) is forced to halt and, if so, computes the forced value. Having done all that, f returns a value greater than the maximum of the computed values.
Fix any infinite path P through the infinite tree T of the acceptable parts induced by the infinite descending sequence of conditions. We claim that f dominates every function computed by H(P ) ⊕ C. Fix any Turing index e ∈ ω such that Φ H(P )⊕C e is total. Consider any input x ≥ e and the corresponding stage s = ⟨e, x⟩. As Φ H(P )⊕C e (x) halts, the function f computes its forced value at stage s and returns a greater value. As F s,P (s) is an initial segment of H(P ), Φ H(P )⊕C e (x) < f (x). This completes the proof of AMT ≤ c EM. We identify a k-cover Z 0 ∪ · · · ∪ Z k−1 of some set X with the k-fold join of its parts.
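The overall shape of the dominating function f can be sketched computationally (a toy model of ours: the table `decided` stands in for the C ′ -decisions made at stage s = ⟨e, x⟩, and is a hypothetical input rather than an actual oracle computation):

```python
def dominating_value(x, decided):
    """decided maps (e, part) to the value forced for Phi_e on input x,
    or to None when divergence is forced.  The returned value exceeds
    every value that some part forces Phi_e (for e <= x) to output."""
    vals = [v for (e, _part), v in decided.items() if e <= x and v is not None]
    return max(vals, default=0) + 1

decided = {(0, "nu0"): 5, (1, "nu0"): None, (1, "nu1"): 9, (3, "nu0"): 100}
```

Since the decisions cover every part of the stage condition, the returned value beats the forced output along every path simultaneously, which is why f need not identify the true path P.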
The domination framework
The actual proof of Theorem 3.1 is slightly stronger than its statement, as it creates a degree d bounding EM together with a computable instance X of AMT such that d bounds no solution to X. Therefore, having solutions to multiple tournaments in parallel is not enough to compute a solution to X. One may however ask whether sequential applications of EM (that is, defining a tournament such that every transitive subtournament will be used to define another tournament, and so on) are enough to compute a solution to X.
Answering this question negatively requires diagonalizing against solutions Y 0 to computable instances of EM, but also against solutions Y 1 to Y 0 -computable instances of EM, and so on. The difficulty comes from the fact that the diagonalizations happen at finite stages, at which we only have access to a finite approximation of Y 0 , and hence to a finite part of the Y 0 -computable instances of EM. Thankfully, we only need a finite piece of an EM-instance to diagonalize against its solutions.
In this section, we develop a framework for building an ω-structure M satisfying some principle P such that every function in M is dominated by a single ∅ ′ -computable function. Since, by definition, the first-order part of an ω-structure is the set of standard natural numbers, ω-structures are characterized by their second-order part. An ω-structure satisfies RCA 0 if and only if its second-order part is a Turing ideal, i.e., a set of reals I closed under effective join and Turing reduction.
The whole construction will be done by iterating uniformly and ∅ ′ -effectively the forcing constructions presented in the previous sections. We will not directly deal with the concrete forcing notion used for constructing solutions to EM-instances. Instead, we will manipulate an abstract partial order of forcing conditions. Abstracting the construction has several advantages: 1. It enables the reader to focus on the operations which are the essence of the construction.
The reader will not be distracted by the implementation subtleties of EM, which are not needed to understand the overall structure. 2. The construction is more modular. We will be able to implement modules for EM and WKL independently, and combine them in section 6 to obtain a proof that EM ∧ WKL does not imply AMT, without changing the main construction. This also enables reverse mathematicians to prove that other principles do not imply AMT without having to reprove the administrative aspects of the construction.
We shall illustrate our definitions with the case of COH in order to give a better intuition about the abstract operators we will define. As explained in section 3, the separation of COH from AMT is already a consequence of the separation of EM from AMT. Therefore implementing the framework with COH is only for illustration purposes.
Support
The first step consists of defining the abstract partial order which will represent the partial order of forcing conditions. We start with an analysis of the common aspects of the different forcing notions encountered so far, in order to extract their essence and define the abstract operators. In what follows, we shall use the word stage to denote a temporal step in the construction. An iteration is a spatial step representing progress in the construction of the Turing ideal. Multiple iterations are handled at a single stage.
Parts of a condition. When constructing cohesive sets for COH or transitive subtournaments for EM, we have been working in both cases with conditions representing parallel Mathias conditions. We shall therefore associate to our abstract notion of condition a notion of part representing one of the solutions we are building. A single abstract condition will have multiple parts representing the various candidate solutions constructed in parallel for the same instance.
For example, in the forcing notion for COH, a condition c = (F ν : ν ∈ 2 n ) can be seen as 2 n parallel Mathias conditions (F ν , R ν ) where R 0 , R 1 , . . . is the universal instance of COH. In this setting, the parts of c are the pairs ⟨c, ν⟩ for each ν ∈ 2 n . One may be tempted to get rid of the notion of condition and directly deal with its parts since, in COH, a condition is only the tuple of its parts. However, in the forcing notion c = ( F , C) for EM, the parts are interdependent since adding an element to some F ν will remove inconsistent covers from C and may therefore restrict the reservoirs of the other parts.
Satisfaction. As explained, a part represents the construction of one solution, whereas a condition denotes multiple solutions in parallel. We can formalize this intuition by defining a satisfaction function which, given a part of a condition, returns the collection of the sets satisfying it. For example, a set G satisfies part ν of the COH condition c = (F ν : ν ∈ 2 n ) if it satisfies the Mathias condition (F ν , R ν ). Initial condition. In a standard (i.e. non-iterative) forcing, we build an infinite decreasing sequence of conditions, starting from one initial condition c 0 . In COH, this initial condition is (∅, ε), where ε is the empty string. Since R ε = ω, this coincides with the standard initial Mathias condition (∅, ω). In an iterative forcing, we progressively add new iterations by starting a new decreasing sequence of conditions below each part of the parent condition. Since COH admits a universal instance, there is no need to choose which instance we want to solve at each iteration. However, in the case of EM, we will take a new EM-instance each time, so that the resulting Turing ideal is the second-order part of an ω-model satisfying EM. Therefore, an EM-condition is in fact a condition ( F , C, R) where R is an instance of EM. The chosen instance of EM will be decided at the initialization of a new iteration and will be preserved by condition extension. The choice of the instance depends only on the iteration level. Therefore we can define an initialization function which, given some integer, returns the initial condition together with the chosen instance.
Parameters. The difficulty of the iterative forcing comes from the fact that an instance of the principle P may depend on the previous iterations. During the construction, the partial approximations of the previous iterations become more and more precise, enabling the instance at the next iteration to be defined on a larger domain. In the definition of our abstract partial order, we will use a formal parameter D which will represent the join of the solutions constructed in the previous iterations. For example, in the formal definition of the partial order for COH, we will say that some condition d = (E µ : µ ∈ 2 m ) extends another condition c = (F ν : ν ∈ 2 n ) through a syntactic constraint mentioning the parameter D. This syntactic constraint has to be understood as holding for every set X = X 0 ⊕ · · · ⊕ X n−1 such that X i satisfies the ancestor of d in the iteration axis at the ith level. In the case of COH, only a finite initial segment of X is needed to witness the extension. We are now ready to define the notion of module support.
(1) (P, ≤ P ) is a partial order. The set P has to be thought of as the set of forcing conditions. Therefore, the elements of P will be called conditions.
(2) U is a set of parts. The notion of part is due to the fact that most of our forcing conditions represent multiple objects built in parallel.
(3) parts : P → P fin (U) is a computable function which, given some condition c ∈ P, gives the finite set of parts associated to c.
(4) init : N → P is a computable function which, given some integer n representing the iteration level, provides the initial condition of the forcing at the nth iteration.
(5) sat : U → P(2 ω ) is a function which, given some part ν of some condition c, returns the collection of sets satisfying it.
Furthermore, a module support is required to satisfy the following property: (a) If d ≤ P c for some c, d ∈ P, then there is a function f : parts(d) → parts(c) such that sat(ν) ⊆ sat(f (ν)) for each ν ∈ parts(d). We may write it d ≤ f c and say that f is the refinement function witnessing d ≤ P c.
Given two conditions c, d ∈ P such that d ≤ f c, we say that f forks part ν of c if |f −1 (ν)| ≥ 2. This forking notion will be useful in the definition of a module. Let us illustrate the notion of module support by defining one for COH.
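A minimal computational sketch of such a support (a toy encoding of ours: a condition (F ν : ν ∈ 2 n ) is a dict mapping each binary string ν to its finite stem F ν ; the reservoirs R ν and the sat operator are omitted, so only the order-theoretic skeleton is modeled):

```python
def init(n):
    # The initial condition (emptyset, epsilon): a single empty stem,
    # indexed by the empty string.  Since R_epsilon = omega, this matches
    # the standard initial Mathias condition; for COH it does not depend
    # on the iteration level n.
    return {"": frozenset()}

def parts(c):
    # The parts of c are its index strings.
    return sorted(c)

def extends(d, c):
    """d <=_P c with refinement function f(mu) = mu[:k]: every stem of d
    must end-extend the stem of c it refines."""
    k = len(next(iter(c)))
    for mu, E in d.items():
        F = c[mu[:k]]
        if not (F <= E and all(x > max(F, default=-1) for x in E - F)):
            return False
    return True

c = init(0)
d = {"0": frozenset({3}), "1": frozenset()}
```

The refinement function f(µ) = µ[:k] is exactly the "part ν of d refines part µ of c" relation of property (a), with sat-containment replaced here by the checkable end-extension requirement.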
Modules
We previously defined the abstract structure we shall use as a support of the construction. The next step consists of enriching this structure with a few more operators which will enable us to decide Σ 0 1 properties over the constructed sets. The success or failure in forcing some property will depend on the parts of a condition. Note that at a finite stage, we handle a finite tree of conditions. We can therefore cover all cases by asking finitely many questions. Let us go back to the COH example, and more precisely to how we decided Σ 0 1 properties over it. Iteration 1. At the first iteration, we would like to decide whether the Σ 0,G 1 formula ψ(G) = (∃s, m)(Φ G e,s (n) ↓= m) will hold, where G is a formal parameter denoting the constructed set. Furthermore, we want to collect the value of Φ G e (n) if it halts. The formula ψ(G) can be seen as a query, whose answers are either ⟨No⟩ if ψ(G) does not hold, or a tuple ⟨Yes, s, m⟩ such that Φ G e,s (n) ↓= m if ψ(G) holds. Given some condition c = (F ν : ν ∈ 2 n ), we can ask on each part ⟨c, ν⟩ whether the formula ψ(G) will hold or not, by boxing the query ψ(G) into a Σ 0 1 query φ without the formal parameter G, such that φ holds if and only if we can find an extension d of c forcing ψ(G) on the parts of d refining part ν of c. Concretely, φ asserts the existence of a finite set F 1 ⊆ R ν ∖ [0, max(F ν )] and of integers s, m such that Φ F ν ∪F 1 e,s (n) ↓= m. This query can be ∅ ′ -decided. If the formula φ holds, we can effectively find some answer to φ, that is, a tuple ⟨Yes, F 1 , s, m⟩ such that F 1 ⊆ R ν ∖ [0, max(F ν )] and Φ F ν ∪F 1 e,s (n) ↓= m. The extension d obtained by adding F 1 to F ν forces ψ(G) to hold for every set G satisfying the part ⟨d, ν⟩ of the condition d. The answer to ψ(G) is obtained by forgetting the set F 1 from the answer to φ. On the other hand, if the formula φ does not hold, the answer is ⟨No⟩ and c already forces ψ(G) not to hold.
Iteration 2. At the second iteration, we work with conditions c 1 = (E µ : µ ∈ 2 m ) which are below some part ν of some condition c 0 = (F ν : ν ∈ 2 n ) living at the first iteration level. We want to decide Σ 0,G 0 ,G 1 1 formulas, where G 0 and G 1 are formal parameters denoting the sets constructed at the first iteration and at the second iteration, respectively. We basically want to answer queries ϕ(G 0 , G 1 ) of this form, and we will ask this question on each part of c 1 . By the same boxing process as before applied relative to c 1 , we obtain a formula ψ(G 0 ) getting rid of the formal parameter G 1 . The formula ψ(G 0 ) is now a query at the first iteration level. We can apply another boxing to ψ(G 0 ) relative to c 0 to obtain a Σ 0 1 formula φ without any formal parameter.
This formula can again be ∅ ′ -decided. If it holds, an answer a = ⟨Yes, F 1 , E 1 , s, m⟩ can be given. At the first iteration level, we unbox the answer a to obtain a tuple b = ⟨Yes, E 1 , s, m⟩ and an extension d 0 of c 0 . The extension d 0 forces the tuple b to answer the query ψ(G 0 ) and is obtained by adding F 1 to F ν . At the second iteration level, we unbox again the answer b to obtain a tuple ⟨Yes, s, m⟩ and an extension d 1 of c 1 , forcing ⟨Yes, s, m⟩ to answer the query ϕ(G 0 , G 1 ). The whole decision process is summarized in Figure 3.
Progress. We may also want to force some specific properties required by the principle P. In the case of Ramsey-type principles, we need to force the set G to be infinite. This can be done with the following query for each k: (∃n > k)[n ∈ G] The progress query can take various forms, depending on the considered principle. For example, in WKL, we need to force the path to be infinite by asking the following question for each k: (∃σ ∈ 2 k )[σ ≺ G] We will therefore define some progress operator which outputs some query that the construction will force to hold or not. We will choose the actual forcing notions so that the formula can be forced to hold for at least one part of each condition. The parameter k will not be given to the operator, since it can be boxed into the current condition, in a monadic style.
We are now ready to define the notion of module as a module support enriched with some boxing, unboxing and progress abstract operators. In what follows, Query[ X] is the set of all Σ 0 1 formulas with X as formal parameters, and Ans[ X] is the set of their answers.
(1) box : U × Query[D, G] → Query[D] is a computable function which, given some part ν of some condition c ∈ P and some Σ 0 1 formula ϕ(D, G), outputs a Σ 0 1 formula ψ(D) encoding ϕ(D, G).
(2) unbox is a computable function which, given some part ν of some condition c ∈ P and some answer a to a Σ 0 1 formula ψ(D) encoding a Σ 0 1 formula ϕ(D, G), outputs a tuple ⟨d, f, g⟩ such that d ≤ f c where f forks only part ν of c, and for every part µ of d such that f (µ) = ν, and every set G ∈ sat(µ), g(µ) is an answer to ϕ(D, G).
(3) prog : U → Query[D, G] is a computable function which provides a question forcing some progress in the solution. It usually asks whether we can force the partial approximation to be defined on a larger domain.
Let us go back to the COH case. Define the COH module ⟨S, box, unbox, prog⟩ as follows: S is the COH module support previously defined. Given some condition c = (F ν : ν ∈ 2 n ), some ν ∈ 2 n and some query ϕ(D, G), box(⟨c, ν⟩, ϕ) is the query ψ(D) obtained from ϕ(D, G) by the boxing process described above. Set unbox(⟨c, ν⟩, ⟨No⟩) = ⟨c, id, g⟩ where id is the identity function and g(ν) = ⟨No⟩. Given an answer a = ⟨Yes, F 1 , a 1 ⟩ to the question ψ(D), unbox(⟨c, ν⟩, a) = ⟨d, f, g⟩ where d = (E µ : µ ∈ 2 n ) is an extension of c such that E ν = F ν ∪ F 1 , and E µ = F µ whenever µ ≠ ν. The function f : U → U is defined by f (⟨d, µ⟩) = ⟨c, µ⟩ for each µ ∈ 2 n . The function g : U → Ans[D, G] is the constant function defined by g(⟨d, µ⟩) = ⟨Yes, a 1 ⟩.
We claim that f is a refinement function witnessing d ≤ c. For every µ ≠ ν, sat(⟨d, µ⟩) = sat(⟨c, µ⟩) since E µ = F µ . By definition of an answer, F 1 ⊆ R ν ∖ [0, max(F ν )], hence sat(⟨d, ν⟩) ⊆ sat(⟨c, ν⟩). When considering cohesiveness, we must ensure an additional kind of progress. Indeed, we must partition the reservoir according to (R σ : σ ∈ 2 n ) for larger and larger n. We can slightly modify the forcing notion for COH and "hack" this kind of progress into the unbox operator by making it return a condition whose parts are split accordingly. Since the separation of EM from AMT entails the separation of COH from AMT, we will not go into the details of fixing this progress issue.
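The boxing step for COH can be imitated computationally (a toy of ours: queries are Python predicates, the reservoir is a finite list, and a bounded search stands in for the Σ 0 1 search; none of the names below belong to the paper):

```python
from itertools import combinations

def box(stem, reservoir, query):
    """Turn a query phi(D, G) about the final set G into a query psi(D):
    psi searches for a finite end-extension F1 of the stem inside the
    reservoir making phi hold, and reports the witness."""
    def psi(D):
        for r in range(len(reservoir) + 1):
            for F1 in combinations(reservoir, r):
                if query(D, stem | set(F1)):
                    return ("Yes", set(F1))
        return ("No",)
    return psi

# Toy query: phi(D, G) holds when D plus the sum of G reaches 11.
psi = box({1}, [4, 6, 8], lambda D, G: D + sum(G) >= 11)
```

Unboxing then just forgets the witness F1 from a ⟨Yes, F 1 , . . .⟩ answer while extending the stem by F1, mirroring the description above.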
Construction
We will construct an infinite sequence of trees of conditions by stages, such that each level corresponds to one iteration. We will add progressively more and more iterations, so that the limit tree is of infinite depth. In order to simplify the presentation of the construction, we need to introduce some additional terminology.
Definition 4.3 (Stage tree)
A stage tree is a finite tree T whose nodes are conditions and whose edges are parts of conditions. It is defined inductively as follows: A stage tree of depth 0 is a condition. A stage tree of depth n + 1 is a tuple c, h where c is a condition and h is a function such that h(ν) is a stage tree of depth n for each ν ∈ parts(c).
We consider that the stage subtree h(ν) is linked to c by an edge labelled ν. The root of T is T itself if T is a stage tree of depth 0. If T = ⟨c, h⟩ then the root of T is c. According to our notation on trees, if T = ⟨c, h⟩, we write T [ν] to denote h(ν). We also write T ↾k to denote the restriction of T to its stage subtree of depth k. At each stage s of the construction, we will end up with a stage tree of depth s. The initial stage tree will be T 0 = init(0). There is a natural notion of stage tree extension induced by the extension of its conditions. Definition 4.4 (Stage tree extension) A stage tree T 1 of depth n extends a stage tree T 0 of depth 0 if there is a function f such that c 1 ≤ f T 0 , where c 1 is the root of T 1 . We say that f is a refinement tree of depth 0 and write T 1 ≤ f T 0 . A stage tree T 1 = ⟨c 1 , h 1 ⟩ of depth n + 1 extends a stage tree T 0 = ⟨c 0 , h 0 ⟩ of depth m + 1 if there is a function f such that c 1 ≤ f c 0 and a function r such that r(ν) is a refinement tree of depth m with h 1 (ν) ≤ r(ν) h 0 (f (ν)) for each part ν of c 1 . The tuple R = ⟨f, r⟩ is a refinement tree of depth m + 1 and we write T 1 ≤ R T 0 .
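Definition 4.3 translates directly into a recursive data structure (a toy encoding of ours: a stage tree is a pair (condition, children), with children = None at depth 0 and a dict from parts to subtrees otherwise):

```python
def depth(T):
    # A stage tree of depth 0 is a bare condition; otherwise one more
    # than the depth of any subtree (all subtrees have equal depth).
    cond, sub = T
    return 0 if sub is None else 1 + depth(next(iter(sub.values())))

def restrict(T, k):
    """The restriction T|k from the text: keep only the top k levels of
    subtrees below the root."""
    cond, sub = T
    if k == 0 or sub is None:
        return (cond, None)
    return (cond, {nu: restrict(t, k - 1) for nu, t in sub.items()})

T = ("c0", {"a": ("d0", {"x": ("e0", None)}),
            "b": ("d1", {"y": ("e1", None)})})
```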
Note that if T 1 ≤ R T 0 , where T 1 is a stage tree of depth n and T 0 is a stage tree of depth m, then n ≥ m. We may also write T 1 ≤ T 0 if there is a refinement tree R of depth m such that T 1 ≤ R T 0 . At each stage, we will extend the current stage tree to a stage tree of larger depth whose conditions force more and more properties. The resulting sequence of stage trees T 0 ≥ T 1 ≥ . . . can be seen as a 2-dimensional tree with the following axes: − The stage axis is a temporal dimension. Let c 0 ≥ c 1 ≥ . . . be such that c s is the root of T s for each stage s. As we saw in the computable non-reducibility case, the parts of this sequence form an infinite, finitely branching tree. (Figure 4 gives an example: the refinement tree R whose nodes are {f 0 , f 1 , f 2 , f 3 } witnesses the extension of the stage tree T 0 whose nodes are {c 0 , d 0,0 , d 0,1 } by the stage tree T 1 whose nodes are {c 1 , d 1,0 , d 1,1 , d 1,2 , e 1,0 , e 1,1 , e 1,2 }. The condition c 1 has three parts, and f 0 -refines the condition c 0 which has two parts. The conditions d 1,0 , d 1,1 and d 1,2 have only one part. The path c 1 − d 1,1 − e 1,1 through the tree T 1 R-refines the path c 0 − d 0,0 through the tree T 0 .) Let P be any infinite path through this tree. More formally, P is a sequence ν 0 , ν 1 , . . . such that ν s+1 is a part of c s+1 refining the part ν s in c s for each s. Consider now the sequence of stage trees below the path P , which lives at the second iteration level. Its roots induce another infinite, finitely branching tree, and so on. Therefore, at each level, we can define an infinite, finitely branching tree of parts, once we have fixed the path P through the tree of parts at the previous level. − The iteration axis is a spatial (or vertical) dimension corresponding to the depth. The notion of stage tree makes explicit the finite tree obtained when fixing a stage.
A path through a stage tree corresponds to the choices made at each level, between the different parts of a condition. We did not define the notion of acceptable part in this framework. Therefore, the choice of the part is delegated to the module, which will have to justify that at least one of the parts is extensible.
Definition 4.5 (Partial path) A partial path ρ through a stage tree T of depth n is defined inductively as follows: A partial path through a stage tree T of depth 0 is a part of T . A partial path through a stage tree T = c, h of depth n + 1 is either a part of c, or a sequence ν, ρ where ν is a part of c and ρ is a partial path through h(ν). A path through T is a partial path of length n + 1.
We denote by P (T ) and by P P (T ) the collection of paths and partial paths through T , respectively. Note that a partial path has length at least 1. The notion of refinement between partial paths is defined in the natural way. We can also extend the notation T [ρ] to partial paths ρ through T with the obvious meaning. There is also a notion of satisfaction of a stage tree induced by the sat operator. Definition 4.6 (Stage tree satisfaction) A set G 0 satisfies a partial path ν 0 through a stage tree T of depth 0 if G 0 ∈ sat(ν 0 ). A tuple of sets G 0 , G 1 , . . . , G k satisfies a partial path ν 0 , . . . , ν k through a stage tree T = ⟨c, h⟩ of depth n + 1 if G 0 ∈ sat(ν 0 ) and, either k = 0, or G 1 , . . . , G k satisfies the partial path ν 1 , . . . , ν k through the stage tree h(ν 0 ). A tuple of sets G 0 , G 1 , . . . , G k satisfies a stage tree T of depth n if it satisfies a partial path through T .
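Definition 4.5 can be made concrete on the same kind of toy encoding (ours: here a condition is simply its list of parts): a partial path picks one part per level and may stop anywhere, while full paths have maximal length.

```python
def partial_paths(T):
    """All partial paths of Definition 4.5: a part of the root condition,
    optionally followed by a partial path through the chosen subtree."""
    parts, sub = T
    out = []
    for nu in parts:
        out.append([nu])
        if sub is not None:
            out.extend([nu] + rho for rho in partial_paths(sub[nu]))
    return out

def paths(T):
    # Full paths through a stage tree of depth n have length n + 1.
    n = max(len(rho) for rho in partial_paths(T))
    return [rho for rho in partial_paths(T) if len(rho) == n]

T = (["a", "b"], {"a": (["x"], None), "b": (["y", "z"], None)})
```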
We now prove a few lemmas stating that we can compose locally the abstract operators to obtain some global behavior. The first trivial lemma shows how to increase the size of a stage tree. This is where we use the operator init. Lemma 4.7 (Growth lemma) For every stage tree T 0 of depth n and every m, there is a stage tree T 1 of depth n + 1 such that T 1 ↾n = T 0 , and whose leaves are init(m). Moreover, T 1 can be computably found uniformly in T 0 .
Proof. The proof is done inductively on the depth of T 0 . In the base case, T 0 is a stage tree of depth 0 and is therefore a condition c 0 . Let h be the function such that h(ν) = init(m) for each ν ∈ parts(c 0 ). The tuple T 1 = c 0 , h is a stage tree of depth 1 such that T 1 ↾0 = c 0 = T 0 . It can be computably found uniformly in T 0 since init and parts are computable. Suppose now that T 0 = c 0 , h 0 is a stage tree of depth n+1. By induction hypothesis, we can define a function h 1 such that for each ν ∈ parts(c 0 ), h 1 (ν) is a stage tree of depth n + 1 and h 1 (ν)↾n = h 0 (ν). The tuple T 1 = c 0 , h 1 is a stage tree of depth n + 2 such that T 1 ↾n + 1 = c 0 , h 0 = T 0 .
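The induction in the growth lemma is short enough to be executed (same toy encoding as before, ours: a condition is its list of parts, and init_m stands for the fresh initial condition init(m) to hang below every leaf part):

```python
def grow(T, init_m):
    """Growth lemma sketch: extend a stage tree of depth n to depth n + 1
    by attaching the initial condition init_m below every part of every
    leaf.  The restriction of the result to depth n is T itself."""
    parts, sub = T
    if sub is None:
        # Base case: a depth-0 tree gains one level of subtrees.
        return (parts, {nu: (init_m, None) for nu in parts})
    # Inductive case: grow each subtree, leaving the root untouched.
    return (parts, {nu: grow(t, init_m) for nu, t in sub.items()})

T0 = (["a", "b"], None)
T1 = grow(T0, ["p"])
```

Since the root and all intermediate conditions are unchanged, the refinement witnessing T 1 ≤ T 0 is the tree of identity functions, as noted right after the lemma.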
We will always apply the growth lemma in the case m = n + 1. However, the full statement was necessary to apply the induction hypothesis. Note that, since T 1 ↾n = T 0 , we have T 1 ≤ T 0 as witnessed by taking the refinement tree of identity functions. The next lemma states that, given some stage tree T 0 and some query ϕ(D, G), we can obtain another stage tree T 1 ≤ T 0 in which we have decided ϕ(D, G) at every part of every condition in T 0 . Its proof is non-trivial since, when forcing some property, we may increase the number of branches of the stage tree. We therefore need to define an elaborate decreasing property to prove termination of the procedure. The query lemma is assumed for now and will be proven in subsection 4.4.
Lemma 4.8 (Query lemma) Let T 0 be a stage tree of depth n and q : P P (T ) → Query[D, G] be a computable function. There is a stage tree T 1 ≤ T 0 of depth n such that every partial path ξ through T 1 refines a partial path ρ through T 0 for which T 1 ⊩ ξ q(ρ) or T 1 ⊩ ξ ¬q(ρ). Moreover, T 1 and the function of answers a : P P (T 1 ) → Ans[D, G] can be ∆ 0 2 -found uniformly from T 0 .
The following domination lemma is a specialization of the query lemma, obtained by considering queries about the termination of programs. Lemma 4.9 (Domination lemma) For every stage tree T 0 of depth n, there is a stage tree T 1 ≤ T 0 of depth n and a finite set U ⊂ ω such that for every tuple G 0 , . . . , G n satisfying T 1 and every e, x, i ≤ n, Φ G 0 ⊕···⊕G i e (x) ∈ U whenever Φ G 0 ⊕···⊕G i e (x) halts. Moreover, T 1 and U can be ∆ 0 2 -found uniformly from T 0 .
Proof. Apply successively the query lemma with q(ξ) = (∃s, m)Φ D⊕G e,s (x) ↓= m for each e, x ≤ n, in order to obtain the tree T 1 together with an upper bound k to the answers to q(ρ). We claim that the set U = [0, k] satisfies the desired property. Let G 0 , . . . , G n be a tuple satisfying T 1 , and let e, x, i ≤ n be such that Φ G 0 ⊕···⊕G i e (x) ↓= m for some m. By definition of satisfaction, there is some partial path ρ through T 1 such that G 0 , . . . , G i satisfies ρ. By the query lemma, either q(ρ) or its negation is forced at ρ. Since Φ G 0 ⊕···⊕G i e (x) ↓, the former holds, and k is greater than the answer to the query, hence greater than m. Uniformity is inherited from the query lemma.
We construct an infinite ∆ 0 2 sequence of finite trees of conditions T 0 ≥ T 1 ≥ . . . as follows: At stage 0, we start with a stage tree T 0 of depth 0 defined by init(0). At each stage s > 0, assuming we have defined a stage tree T s−1 of depth s − 1, act as follows: (S1) Growth: Apply the growth lemma to obtain a stage tree T 1 s ≤ T s−1 of depth s. Intuitively, this step adds a new iteration and therefore ensures that the construction will have eventually infinitely many levels of iteration. (S2) Progress: Apply to T 1 s the query lemma with q = prog to obtain a stage tree T 2 s ≤ T 1 s such that the progress function is forced at each partial path. This step ensures that for every tuple G 0 , G 1 , . . . such that G 0 , . . . , G k satisfies each T s , s ≥ k, the progress query will have been decided on G i infinitely many times. (S3) Domination: Apply to T 2 s the domination lemma to obtain a stage tree T s ≤ T 2 s and a finite set U such that for every tuple G 0 , . . . , G s satisfying T s and every e, x, i ≤ s, if Φ G 0 ⊕···⊕G i e (x) halts, then its value will be in U . Since the whole construction is ∆ 0 2 and we uniformly find such a set U , this step enables us to define a ∆ 0 2 function which will dominate every function in the Turing ideal.
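The stage-s step of the construction can be summarized schematically as follows (this is only a restatement of steps (S1)–(S3) above, using the intermediate names of the construction):

```latex
\[
T_{s-1}
\;\overset{\text{(S1) growth}}{\geq}\; T^1_s
\;\overset{\text{(S2) query, } q = \mathrm{prog}}{\geq}\; T^2_s
\;\overset{\text{(S3) domination}}{\geq}\; T_s,
\qquad \operatorname{depth}(T_s) = s.
\]
```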
Queries
In this section, we develop the tools necessary to prove the query lemma (Lemma 4.8). Given some stage tree T 0 and some query function q : P P (T ) → Query[D, G], the query lemma states that we can find a stage tree T 1 extending T 0 which forces either q(ρ) or its negation on each partial path through T 1 refining the partial path ρ through T 0 . The stage tree T 0 is finite and has therefore finitely many partial paths. The naive algorithm would consist of taking an arbitrary partial path ρ through T 0 , then deciding q(ρ) thanks to the process illustrated in Figure 3, and extending T 0 into a stage tree T 1 which forces q(ρ) or its negation on every path refining ρ. One may expect to obtain the query lemma by iterating this process finitely many times.
The termination of the algorithm depends on the shape of the extension T 1 obtained after deciding q(ρ). We need to ensure that we made some progress so that we will have covered all paths at some point. Let us look more closely at the construction of the extension T 1 . Given some query ϕ(D, G) and some part ν, we call the box(ν, ϕ) operator to obtain another query ψ(D) getting rid of the forcing variable G. Using ∅ ′ , we obtain an answer a to the formula ψ(∅) and then call unbox(ν, a) to obtain some extension forking only ν, and forcing either ϕ(D, G) or its negation on every part refining ν. This extension may therefore increase the number of parts, but ensures some progress on each of the forked parts.
If T 0 is a stage tree of depth 0, the termination of the process is clear. Indeed, T 0 is a condition c 0 and the partial paths through T 0 are simply the parts of c 0 . We end up with a stage tree T 1 of depth 0 corresponding to some condition c 1 , on which we have decided ϕ(D, G) for every part of c 1 refining some part ν of c 0 . Since we have not forked any other part than ν, the number of undecided parts strictly decreases. A condition has finitely many parts, so the process terminates after at most |parts(c 0 )| steps.
The progress becomes much less clear if T 0 is a stage tree of depth 1. When trying to decide some query on some path ν 0 , ν 1 through T 0 , we need to extend both the root, and the conditions below each part µ refining ν 0 . The overall number of undecided paths may increase, and therefore a simple cardinality argument is not enough to deduce termination. Note that this algorithm has some common flavor with the hydra game introduced by Kirby and Paris [17] and whose termination is not provable in Peano arithmetic. Thankfully, our problem is much simpler and its termination can be proven by elementary means.
In Figure 5, we give an example of one step in the decision process, starting with a stage tree T 0 of depth 1 with three undecided paths, and ending up with some stage tree T 1 having four undecided paths (among them c 1 − d 1,0 − µ 1,2 ). Thankfully, the unbox operator forks only the part on which it answers the query. Therefore, at the next step, we will be able to consider only one of the parts of c 1 at a time. The induced subtree has two undecided paths, so there is also some progress.
We now define some relation ⊏ between two stage trees T 0 and T 1 of depth n. It describes the relation between the stage tree T 0 and the extension T 1 obtained after applying one step of the query algorithm. More precisely, T 2 ⊏ T 0 if T 2 is the subtree of T 1 from which we have removed every decided path.

Figure 5. In this example, we start with a stage tree T 0 of depth 1 and want to decide some query ϕ(D, G) for each of its three paths. We choose one path ρ = c 0 − d 0 − ν 0,0 , call box(ν 0,0 , ϕ) to obtain a query ψ(D), then call box(λ, ψ), where λ is the unique part of c 0 . We obtain a query φ(D), ∅ ′ -compute some answer a to φ(∅) and call unbox(λ, a) to obtain some extension c 1 of c 0 and some answering function b : parts(c 1 ) → Ans [D, G]. This extension forks the part λ into two parts. Below each part λ i in c 1 , we call unbox(λ i , b(λ i )) to obtain an extension of d 1,i forcing ϕ(D, G) below the parts refining ν 0,0 .
One easily proves by mutual induction over the depth of the trees the following facts:
(i) Both ⊏ and ⊑ are transitive.
(ii) If T 1 ⊏ T 0 then T 1 ⊑ T 0 .
(iii) If T 2 ⊑ T 1 and T 1 ⊏ T 0 then T 2 ⊏ T 0 .
(iv) If T 2 ⊏ T 1 and T 1 ⊑ T 0 then T 2 ⊏ T 0 .
Assuming that ⊏ truly represents the relation between a stage tree and its extension after one step of query, the following lemma can be understood as stating that the naive algorithm used in the proof of the query lemma terminates.
Lemma 4.11
The relation T 1 ⊏ T 0 is well-founded.
Proof. By induction over the depth of the stage trees. Suppose that T 0 ⊐ T 1 ⊐ . . . is an infinite decreasing sequence of stage trees of depth 0. In particular, the T 's are conditions and T 0 ≥ f 0 T 1 ≥ f 1 . . . for some functions f i which are injective, but not surjective. Therefore the number of parts strictly decreases at each step, yielding an infinite strictly decreasing sequence of natural numbers, a contradiction.
Suppose now that T 0 ⊐ T 1 ⊐ . . . is an infinite decreasing sequence of stage trees of depth n + 1, where T i = c i , h i and c i ≥ f i c i+1 . Let S be the set of parts ν in some c i which will fork at a later c j . This S induces a finitely branching tree. If S is finite, then there is some j such that no part of c k forks for any k ≥ j. By the infinite pigeonhole principle, we can construct an infinite, decreasing sequence of trees of depth n, contradicting our induction hypothesis. So suppose that S is infinite. By König's lemma, there is an infinite sequence of parts, each refining the previous one, such that they all fork. Each time a condition forks, the subtree strictly decreases, so we can define an infinite decreasing sequence of stage trees of depth n, again contradicting our induction hypothesis.
Given some stage tree T 1 of depth i < n, a completion of T 1 to n is a stage tree T 2 of depth n such that T 2 ↾i = T 1 . If T 1 ≤ T 0 ↾i for some stage tree T 0 of depth n, T 0 induces a completion T 2 of T 1 to n by setting T 2 [ξ] = T 0 [ρ] for every path ξ through T 1 refining some path ρ through T 0 ↾i. One easily checks that T 2 ≤ T 0 . Such a stage tree is called the trivial completion of T 1 by T 0 . The following technical lemma will be useful for applying the induction hypothesis in Lemma 4.14.
Lemma 4.12 Let T 0 , T 1 be two stage trees of depth n + 1, T 2 be a stage tree of depth n and S 0 be a set of paths through T 0 ↾n such that (i) T 2 ⊑ T 0 ↾n, P (T 2 ) ⊆ P (T 1 ↾n) and T 1 ≤ T 0 (ii) S 0 is the set of paths through T 0 ↾n refined by some path through T 2 (iii) For every path ξ ∈ P (T 2 ), T 1 [ξ] ⊑ T 0 [ρ] where ξ refines the path ρ ∈ S 0 (iv) For every path ξ ∈ P (T 1 ↾n) ∖ P (T 2 ), ξ refines some path ρ ∈ P (T 0 ↾n) ∖ S 0 and T 1 [ξ] ⊏ T 0 [ρ]. Then T 1 ⊑ T 0 and, if moreover T 2 ⊏ T 0 ↾n, then T 1 ⊏ T 0 . Proof. By induction over n. In the base case, T 0 ↾n, T 1 ↾n and T 2 are conditions c 0 , c 1 and c 2 such that c 2 ≤ f c 0 and c 1 ≤ g c 0 for some refinement functions f and g. We easily have T 1 ⊑ T 0 : by (iii), T 1 [µ] ⊑ T 0 [g(µ)] for every part µ of c 2 (and therefore of c 1 ), and by (iv), T 1 [µ] ⊏ T 0 [g(µ)] whenever µ is a part of c 1 which is not a part of c 2 . By c 2 ⊑ c 0 , the only places where a fork can happen are when µ is not in c 2 .
We now want to prove that T 1 ⊏ T 0 whenever c 2 ⊏ c 0 . Since c 2 ⊏ c 0 , f is injective, but not surjective. We need to prove that there is some part ν of c 1 witnessing the strictness of T 1 ⊏ T 0 . We have two cases. In the first case, f and g have the same domain. In this case f = g and since f is not surjective, there is some part of c 0 witnessing the strictness of T 1 ⊏ T 0 . In the second case, there is some part ν in c 1 but not in c 2 . By (iv), g(ν) ∉ S 0 and T 1 [ν] ⊏ T 0 [g(ν)]. The part g(ν) of c 0 witnesses the strictness of T 1 ⊏ T 0 .
In the induction case, T 0 ↾n = c 0 , h 0 , T 1 ↾n = c 1 , h 1 and T 2 = c 2 , h 2 such that c 2 ≤ f c 0 and c 1 ≤ g c 0 for some refinement functions f and g. For every part ν in c 1 , we have two cases: In the first case, ν is not in c 2 . By (iv), any path ξ through h 1 (ν) refines some path ρ in h 0 (g(ν)) such that h 1 (ν) [ξ] ⊏ h 0 (g(ν)) [ρ] . By the induction hypothesis applied to h 0 (g(ν)), h 1 (ν) and the empty tree, h 1 (ν) ⊏ h 0 (g(ν)). In the second case, ν is also in c 2 . By the induction hypothesis applied to h 0 (g(ν)), h 1 (ν) and h 2 (ν), h 1 (ν) ⊑ h 0 (g(ν)). We again easily have T 1 ⊑ T 0 since h 1 (ν) ⊑ h 0 (g(ν)) for every part ν in c 1 and since, whenever g forks some part µ of c 0 , either the parts ν of c 1 refining µ are all in c 2 , in which case h 2 (ν) ⊏ h 0 (µ)↾n by the definition of the partial order and then we have h 1 (ν) ⊏ h 0 (µ), or none of the parts ν of c 1 refining µ are in c 2 , in which case we have h 1 (ν) ⊏ h 0 (µ). By the same case analysis as in the base case, we deduce that T 1 ⊏ T 0 if moreover T 2 ⊏ T 0 ↾n.

Definition 4.13 (Stage tree subtraction) Given a stage tree T of depth n and a set S of paths through T , we define T − S inductively as follows: If T is a stage tree of depth 0, then S is a set of parts of T and T − S is the condition whose parts are parts(T ) ∖ S. If T = c, h is a stage tree of depth n + 1, then S is a set of paths of the form νρ where ν is a part of c and ρ is a path through h(ν). For each part ν, let S ν = {ρ : νρ ∈ S}. The stage tree T − S is defined by c, h 1 where h 1 (ν) = h(ν) − S ν for each part ν of c.
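In symbols, the subtraction of Definition 4.13 is the following recursion (with $S_\nu = \{\rho : \nu\rho \in S\}$ as above):

```latex
\[
T - S =
\begin{cases}
  \text{the condition with parts } \operatorname{parts}(T) \setminus S
     & \text{if } T \text{ has depth } 0,\\[2pt]
  \langle c,\, h_1\rangle \text{ where } h_1(\nu) = h(\nu) - S_\nu
     & \text{if } T = \langle c, h\rangle \text{ has depth } n+1.
\end{cases}
\]
```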
Intuitively, T − S is the maximal subtree of T such that P (T − S) = P (T ) ∖ S. Beware: even if we may remove every part of a condition, we do not remove the condition from the tree. The following lemma uses the well-founded partial order defined previously to show that we can make some progress in deciding the queries. In what follows, the stage tree T 0 can be thought of as the stage tree we obtain after having applied finitely many steps of query, and S 0 as the paths through the tree T 0 for which we have already decided the query ϕ(D, G). The lemma describes the relation between the state (T 1 , S 1 ) obtained from (T 0 , S 0 ) after having applied one more step.
Lemma 4.14 Let T 0 be a stage tree of depth n, S 0 be a set of paths through T 0 and let ϕ(D, G) be a query. For every path ρ ∉ S 0 through T 0 , there exists a stage tree T 1 ≤ T 0 of depth n and a set S 1 of paths through T 1 such that (i) T 1 ⊩ ξ ϕ(D, G) or T 1 ⊩ ξ ¬ϕ(D, G) for every path ξ through T 1 refining ρ.
(ii) T 1 − S 1 ⊏ T 0 − S 0 (iii) Every path in S 1 refines either a path in S 0 or ρ.
Moreover, T 1 and the function of answers a : P (T 1 ) → Ans[D, G] can be ∅ ′ -effectively computed uniformly in T 0 and ϕ(D, G).
Proof. By induction over n. If T 0 is a stage tree of depth 0, then it is a condition c 0 and the paths through T 0 are the parts of c 0 . Let ν be such a part. Let ψ(D) be the query box(ν, ϕ). We can ∅ ′ -compute an answer a 0 to ψ(∅). Let c 1 , f, a = unbox(ν, a 0 ) be such that c 1 ≤ f c 0 , f forks only part ν of c 0 and for every part µ of c 1 such that f (µ) = ν, c 1 ⊩ µ ϕ(D, G) or c 1 ⊩ µ ¬ϕ(D, G) and a(µ) answers ϕ(D, G) accordingly. Take S 1 = {µ ∈ parts(c 1 ) : f (µ) = ν or f (µ) ∈ S 0 }. The property (i) holds by definition of c 1 and (iii) holds by definition of S 1 . Since the only forked part is ν and no part of c 1 − S 1 refines ν, c 1 − S 1 ⊏ c 0 − S 0 , so the property (ii) also holds. This completes the base case.
Suppose now that T 0 is a stage tree of depth n + 1. The paths through T 0 are of the form ρν where ρ is a path through T 0 ↾n and ν is a part of the leaf condition T 0 [ρ]. Fix any such path. Let ψ(D) be the query box(ν, ϕ) and let φ(D, G) be the formula ψ(D ⊕ G). By induction hypothesis on T 0 ↾n, there is a stage tree T 2 ≤ T 0 ↾n and a set S 2 such that (i) T 2 ⊩ ξ φ or T 2 ⊩ ξ ¬φ for every path ξ through T 2 refining ρ (ii) T 2 − S 2 ⊏ T 0 ↾n − S 0 ↾n (iii) Every path in S 2 refines either a path in S 0 ↾n or ρ.
Moreover, still by induction hypothesis, we have a function a : P (T 2 ) → Ans[D, G] answering the queries. We define a completion of T 2 into a stage tree T 1 of depth n + 1 as follows: For each path ξ through T 2 refining ρ, let T 1 [ξ] be the condition c ξ such that c ξ , f ξ , a ξ = unbox(ν, a(ξ)). For each path ξ through T 2 which refines some path τ through T 0 ↾n different from ρ, let T 1 [ξ] = T 0 [τ ]. We have T 1 ≤ T 0 since c ξ ≤ T 0 [ρ] whenever ξ refines ρ and since any condition refines itself. Let S 1 be the collection of paths ξµ through T 1 such that ξ ∈ S 2 and either ξ refines ρ and f ξ (µ) = ν, or ξµ refines a path in S 0 . Since (T 0 ↾n − S 0 ↾n) ⊑ (T 0 − S 0 )↾n, we have T 2 − S 2 ⊏ (T 0 − S 0 )↾n. We can therefore apply Lemma 4.12 to T 0 − S 0 , T 1 − S 1 , and T 2 − S 2 , to obtain T 1 − S 1 ⊏ T 0 − S 0 . Define the answer function b : P (T 1 ) → Ans[D, G] by b(ξµ) = a ξ (µ) for each path ξ through T 2 refining ρ. This function b is found ∅ ′ -effectively since the unbox operator is computable.
The following lemma simply iterates Lemma 4.14 and uses the well-foundedness of the relation ⊏ to deduce that we can find some extension on which the queries are decided for every path. Lemma 4.15 Let T 0 be a stage tree of depth n and q : P (T 0 ) → Query[D, G] be a computable function. There is a stage tree T 1 ≤ T 0 of depth n such that T 1 ⊩ ξ q(ρ) or T 1 ⊩ ξ ¬q(ρ) for every path ξ through T 1 refining some path ρ through T 0 . Moreover, T 1 and the function of answers a : P (T 1 ) → Ans[D, G] can be ∅ ′ -effectively computed uniformly in T 0 and q.
Proof. Using Lemma 4.14, define a sequence of tuples T 0 , S 0 , ρ 0 , τ 0 , T 1 , S 1 , ρ 1 , τ 1 , . . . starting with T 0 , S 0 = ∅, ρ 0 = τ 0 ∈ P (T 0 ) and such that for each i: (i) T i+1 ≤ T i and S i+1 is a set of paths through T i+1 (ii) ρ i is a path through T i which is not in S i and which refines the path τ i through T 0 (iii) T i+1 ⊩ ξ q(τ i ) or T i+1 ⊩ ξ ¬q(τ i ) for every path ξ through T i+1 refining ρ i (iv) T i+1 − S i+1 ⊏ T i − S i (v) Every path in S i+1 refines either a path in S i or ρ i . By Lemma 4.11, the relation ⊏ is well-founded, so the sequence has to be finite by (iv). Let k be the maximal index of the sequence. By maximality of k and by Lemma 4.14, P (T k ) ∖ S k = ∅. Therefore, P (T k ) = S k . Since S 0 = ∅ and by (v), we can prove by induction over k that for every path ξ through T k , there is some stage i < k such that ξ refines ρ i . Thus, by (iii) and by stability of the forcing relation under refinement, T k ⊩ ξ q(τ i ) or T k ⊩ ξ ¬q(τ i ). Therefore T k satisfies the statement of the lemma. The uniformity is inherited from the uniformity of Lemma 4.14.
Last, we prove the query lemma by iterating the previous lemma at every depth of the stage tree, to decide the queries on the partial paths.
Proof of the query lemma. Let T 0 be a stage tree of depth n and q : P P (T 0 ) → Query[D, G] be a function. Using Lemma 4.15, define a decreasing sequence of stage trees T 0 ≥ · · · ≥ T n of depth n such that for each i < n, (i) T i+1 is the trivial completion of T i+1 ↾i + 1 by T i .
(ii) T i+1 ⊩ ξ q(τ ) or T i+1 ⊩ ξ ¬q(τ ) for every path ξ through T i+1 ↾i + 1 refining some path τ through T 0 ↾i + 1. To do this, at stage i < n, apply Lemma 4.15 to T i with the query function r : P P (T i ) → Query[D, G] defined by r(ρ) = q(τ ) for each path ρ through T i ↾i + 1 refining some path τ through T 0 ↾i + 1. Since the forcing relation is stable by refinement, the stage tree T n satisfies the statement of the query lemma. The uniformity is again inherited from the uniformity of Lemma 4.15.
This completes the presentation of the framework. We will now define a module for the Erdős-Moser theorem. In section 6, we will see how to compose modules to obtain stronger separations.
The weakness of EM over ω-models
Now that we have settled the domination framework, it suffices to implement the abstract module to obtain ω-structures which do not satisfy AMT. We have illustrated the notion of module by implementing one for COH. An immediate consequence is the existence of an ω-model of COH which is not a model of AMT. In this section, we shall extend this separation to the Erdős-Moser theorem. As noted before, every ω-model of EM which is not a model of AMT is also a model of COH. This section is devoted to the proof of the following theorem. At first sight, the forcing notion introduced in section 3 seems to have a direct mapping to the abstract notion of forcing defined in the domination framework. However, unlike cohesiveness, where the module implementation was immediate, the Erdős-Moser theorem raises new difficulties:
− The Erdős-Moser theorem is not known to admit a universal instance. We will therefore need to integrate the information about the instance in the notion of condition. Moreover, the init operator will have to choose accordingly some new instance of EM at every iteration level. We need to make init computable, but the collection of every infinite computable tournament functional is not even computably enumerable.
− The notion of EM condition introduced in section 3 contains a Π 0,R 1 property ensuring extensibility. Since the tournament R depends on the previous iteration which is being constructed, we have only access to a finite part of R. We therefore need to ensure that whatever the extension of the finite tournament is, the condition will be extendible.
We shall address the above-mentioned problems one at a time in subsections 5.1 and 5.2.
Enumerating the infinite tournaments
In section 3, we were also confronted with the problem of enumerating all infinite tournaments, and solved it by relativizing the construction to a low subuniform degree in order to obtain a low sequence of infinite tournaments containing at least every infinite computable tournament. We cannot apply the same trick to handle the construction of an ω-model of EM, as solutions to some computable tournaments may bound new tournaments, and so on. However, as we shall see, we can restrict ourselves to primitive recursive tournaments to generate an ω-model of EM.
Given a sequence of sets X 0 , X 1 , . . . , define M X to be the ω-structure whose second-order part is the Turing ideal generated by X, that is, the collection of all sets Y such that Y ≤ T X i 0 ⊕ · · · ⊕ X i k−1 for some i 0 , . . . , i k−1 ∈ ω.
There exists a uniformly computable sequence of infinite, primitive recursive tournament functionals T 0 , T 1 , . . . such that for every sequence of sets X 0 , X 1 , . . . such that X i is an infinite transitive subtournament of T X 0 ⊕···⊕X i−1 i for each i, the structure M X is a model of EM.
Proof. As RCA 0 ⊢ SEM ∧ COH → EM, it suffices to prove that for every set X, (i) for every stable, infinite, X-computable tournament R, there exists an infinite X-p.r. tournament T such that every infinite T -transitive subtournament X-computes an infinite R-transitive subtournament. (ii) for every X-computable complete atomic theory T and every uniformly X-computable sequence of sets R, there exists an infinite X-p.r. tournament such that every infinite transitive subtournament X-computes either an R-cohesive set or an atomic model of T . (i) Fix a set X and a stable, infinite, X-computable tournament R. Let f : ω → 2 be the X ′ -computable function defined by f (x) = 0 if (∀ ∞ s)R(s, x) and f (x) = 1 if (∀ ∞ s)R(x, s). By Shoenfield's limit lemma [28], there exists an X-p.r. function g : ω 2 → 2 such that lim s g(x, s) = f (x) for every x ∈ ω. Considering the X-p.r. tournament T such that T (x, y) holds iff x < y and g(x, y) = 1, or x > y and g(x, y) = 0, every infinite T -transitive subtournament X-computes an infinite R-transitive subtournament.
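For readability, the X-p.r. tournament built in part (i) from the limit approximation g can be displayed as follows (a restatement of the definition above):

```latex
\[
T(x,y) \;\Longleftrightarrow\;
\bigl(x < y \,\wedge\, g(x,y) = 1\bigr) \;\vee\; \bigl(x > y \,\wedge\, g(x,y) = 0\bigr),
\qquad \lim_{s} g(x,s) = f(x).
\]
```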
(ii) Jockusch and Stephan proved in [16] that for every set X and every uniformly X-computable sequence of sets R, every p-cohesive set relative to X computes an R-cohesive set. The author proved in [25] that for every X-computable complete atomic theory T , there exists an X ′ -computable coloring f : ω → ω such that every infinite set Y thin for f (i.e., such that f (Y ) ≠ ω) X-computes an atomic model of T . He also proved that for every such X ′ -computable coloring f : ω → ω, there exists an infinite, X-p.r. tournament R such that every infinite transitive subtournament is either p-cohesive, or X-computes an infinite set thin for f . We can therefore fix this computable enumeration T 0 , T 1 , . . . of tournament functionals, and make init(n) return an empty condition paired with T n . Thus, taking at each iteration an infinite set satisfying one of the parts, we obtain an ω-model of EM.
The new Erdős-Moser conditions
Fix some primitive recursive tournament functional R. According to the analysis of the Erdős-Moser theorem presented in section 3, we would like to define the forcing conditions to be tuples ( F , C) where
(a) C is a non-empty Π 0,D 1 k-cover class of [t, +∞) for some k, t ∈ ω
(b) F ν ∪ {x} is R D -transitive for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C, every x ∈ Z ν and each ν < k
(c) Z ν is included in a minimal R D -interval of F ν for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C and each ν < k.
However, at a finite stage, we have only access to a finite part of D, and therefore we cannot express the properties (a-c). Indeed, we may have made some choices about the F 's such that F ν ∪ {x} is not R D -transitive for every part ν, every D satisfying the previous iterations and cofinitely many x ∈ ω. We therefore need to choose the F 's carefully enough so that, whatever the extension of the finite tournament to which we have access, we will be able to extend at least one of the F 's.
The initial condition ({∅}, {ω}) satisfies the properties (a-c) no matter what D is, since {ω} does not depend on D. Let us have a closer look at the question Q2 asked in section 3. For the sake of simplification, we will consider that the question is asked below the unique part of the initial condition. It therefore becomes: Q3: Is there a finite set E ⊆ ω such that for every 2-partition E 0 , E 1 of E, there exists an R D -transitive subset F 1 ⊆ E i for some i < 2 such that ϕ(D, F 1 ) holds?
Notice that this is a syntactic question since it depends on the purely formal variable D representing the effective join of the sets constructed in the previous iterations. Thanks to the usual query process, we are able to transform it into a concrete Σ 0 1 formula getting rid of the formal parameter D, and obtain some answer that the previous layers guarantee to hold for every set D satisfying the previous iterations.
If the answer is negative, then by compactness, for every set D satisfying the previous iterations, there is a 2-partition Z 0 ∪ Z 1 = ω such that for every i < 2 and every R D -transitive subset G ⊆ Z i , ϕ(D, G) does not hold. For every set D, the Π 0,D 1 class C of such 2-partitions Z 0 ⊕ Z 1 is therefore guaranteed to be non-empty. Note again that since D is a syntactic variable, the class C is also syntactic, and purely described by finite means.
If the answer is positive, then we are given some finite set E ⊆ ω witnessing it. Moreover, we are guaranteed that for every set D satisfying the previous iterations and every 2-partition E 0 , E 1 of E, there exists an R D -transitive subset F 1 ⊆ E i for some i < 2 such that ϕ(D, F 1 ) holds. If we knew the set D, we would choose one "good" 2-partition E 0 , E 1 as we do in section 3. However, this choice depends on infinitely many bits of information of D. We will therefore need to try every 2-partition in parallel.
There is one more difficulty. With this formulation, we are not able to find the desired extension, since D is syntactic, and therefore we do not know how to identify the color i and the actual set F 1 given some 2-partition E 0 , E 1 . Thankfully, we can slightly modify the question to ask to provide the witness F 1 for each such partition in the answer.
Q4: Is there a finite set E ⊆ ω and a finite function g such that for every 2-partition E 0 , E 1 of E, g( E 0 , E 1 ) is a finite R D -transitive subset of some E i such that ϕ(D, g( E 0 , E 1 )) holds?
The question Q4 is equivalent to the question Q3, but provides a constructive witness g in the case of a positive answer as well. We can even formulate the question so that we know the relation R D over the set F 1 . Thus we are able to talk about minimal R D -intervals of F 1 . Now, we can extend the initial condition ({∅}, {ω}) into some condition ( F , C) as follows: For each 2-partition E 0 , E 1 of E, letting F 1 = g( E 0 , E 1 ), for every minimal R D -interval I of F 1 , we create a part ν = E 0 , E 1 , I and set F ν = F 1 . Take some t ′ > max( F ) and let C be the Π 0,D 1 class of covers ν Z ν of [t ′ , +∞) such that for every part ν = E 0 , E 1 , I
(b') F ν ∪ {x} is R D -transitive for every x ∈ Z ν
(c') Z ν is included in the minimal R D -interval I
Fix some set D satisfying the previous iterations. We claim that C is non-empty. Any element x ∈ [t ′ , +∞) induces a 2-partition g(x) = E 0 , E 1 of E by setting E 0 = {y ∈ E : R D (y, x)} and E 1 = {y ∈ E : R D (x, y)}. On the other hand, for every 2-partition E 0 , E 1 of E, we can define a partition of [t ′ , +∞) by collecting, for each part ν = E 0 , E 1 , I , the elements x with g(x) = E 0 , E 1 which belong to the minimal R D -interval I. The resulting cover is in C and witnesses the non-emptiness of C.
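The correspondence used in the non-emptiness argument can be displayed explicitly (a restatement of the construction above):

```latex
\[
x \in [t', +\infty)
\;\longmapsto\;
g(x) = \langle E_0, E_1\rangle,
\qquad
E_0 = \{y \in E : R^D(y,x)\},\quad
E_1 = \{y \in E : R^D(x,y)\},
\]
```

and conversely each part $\nu = \langle E_0, E_1, I\rangle$ collects the elements $x$ with $g(x) = \langle E_0, E_1\rangle$ lying in the minimal $R^D$-interval $I$.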
The problem of having access to only a finite part of the class C appears more critically when considering the question below some part ν of an arbitrary condition c = ( F , C). The immediate generalization of the question Q4 is the following.
Q5: For every cover X 0 ⊕ · · · ⊕ X k−1 ∈ C, is there a finite set E ⊆ X ν and a finite function g such that for every 2-partition E 0 , E 1 of E, g( E 0 , E 1 ) is a finite R D -transitive subset of some E j such that ϕ(D, F ν ∪ g( E 0 , E 1 )) holds?
As usual, although this question is formulated in a Π 0 2 manner, it can be turned into a Σ 0,D 1 query using a compactness argument.
Q5': Is there some r ∈ ω, a finite sequence of finite sets E 0 , . . . , E r−1 and a finite sequence of functions g 0 , . . . , g r−1 such that (1) for every X 0 ⊕ · · · ⊕ X k−1 ∈ C, there is some i < r such that E i ⊆ X ν (2) for every i < r and every 2-partition E 0 , E 1 of E i , g i ( E 0 , E 1 ) is a finite R D -transitive subset of some E j such that ϕ(D, F ν ∪ g i ( E 0 , E 1 )) holds?
In the case of a negative answer, we can apply the standard procedure consisting in refining the Π 0,D 1 class C into some Π 0,D 1 class D forcing ϕ(D, G) not to hold on every part refining the part ν in c. The class D is non-empty since we can construct a member of it from a witness of failure of Q5. The problem appears when the answer is positive. We are given some finite sequence E 0 , . . . , E r−1 and a finite sequence of functions g 0 , . . . , g r−1 satisfying (1) and (2). For every D, there is some X 0 ⊕ · · · ⊕ X k−1 ∈ C and some i < r such that E i ⊆ X ν , but this i may depend on D. We cannot choose some E i as we used to do in section 3.
Following our motto, if we are not able to make a choice, we will try every possible case in parallel. The idea is to define a condition d = ( E, D) and a refinement function f forking the part ν into various parts, each one representing a possible scenario. For every part µ of c which is different from ν, create a part µ in d and set E µ = F µ . For every i < r and every 2-partition E 0 , E 1 of E i , create a part µ = i, E 0 , E 1 in d refining ν and set E µ = F ν ∪ g i ( E 0 , E 1 ). Accordingly, let D be the Π 0,D 1 class of covers µ Y µ of [t, +∞) such that there is some i < r and some cover X 0 ⊕ · · · ⊕ X k−1 ∈ C satisfying E i ⊆ X ν . The class D f -refines C, but does not f -refine C [ν,E i ] for some fixed i < r. Because of this, the condition d does not extend the condition c in the sense of section 3. We shall therefore generalize the operator · → C [ν,·] to define it over tuples of sets.
Restriction of a cover class. Given some cover class (k, Y, C), some part ν of C and some r-tuple E 0 , . . . , E r−1 of finite sets, we denote by C [ν, E] the cover class (k + r − 1, Y, D) such that D is the collection of X 0 ⊕ · · · ⊕ X ν−1 ⊕ Z 0 ⊕ · · · ⊕ Z r−1 ⊕ X ν+1 ⊕ · · · ⊕ X k−1 such that X 0 ⊕ · · · ⊕ X k−1 ∈ C and there is some i < r such that E i ⊆ X ν , Z i = X ν and Z j = ∅ for every j ≠ i. In particular, C [ν, E] refines C with some refinement function f which forks the part ν into r different parts. Such a function f is called the refinement function witnessing the restriction.
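Spelled out, the restriction just defined replaces the ν-th column of each cover by r columns, exactly one of which carries the original set (writing $\vec X \in C$ for the source cover):

```latex
\[
C^{[\nu, \vec E\,]} = \bigl\{\,
X_0 \oplus \cdots \oplus X_{\nu-1} \oplus Z_0 \oplus \cdots \oplus Z_{r-1}
  \oplus X_{\nu+1} \oplus \cdots \oplus X_{k-1}
\;:\;
\vec X \in C,\
(\exists i < r)\,\bigl[E_i \subseteq X_\nu \wedge Z_i = X_\nu \wedge (\forall j \neq i)\, Z_j = \emptyset\bigr]
\,\bigr\}.
\]
```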
We need to define the notion of extension between conditions accordingly. A condition d = ( E, D) extends a condition c = ( F , C) (written d ≤ c) if there is a function f : parts(D) → parts(C) such that the following holds: (i) (E ν , dom(D)) Mathias extends (F f (ν) , dom(C)) for each ν ∈ parts(D) (ii) Every µ Y µ ∈ D f -refines some ν X ν ∈ C such that for each part µ of d, either (E µ , Y µ ) Mathias extends (F f (µ) , X f (µ) ), or Y µ = ∅. Note that this notion of extension is coarser than the one defined in section 3. Unlike with the previous notion of extension, there may be from now on some part µ of d refining the part ν of c such that (E µ , Y µ ) does not Mathias extend (F ν , X ν ) for some µ Y µ ∈ D and every ν X ν ∈ C, but in this case, we make (E µ , Y µ ) non-extendible by ensuring that Y µ = ∅.
Implementing the Erdős-Moser module
We are now ready to provide a concrete implementation of a module support and a module for EM. Define the tuple S EM = P, U, parts, init, sat as follows: P is the collection of all conditions ( F , C, R) where R is a primitive recursive tournament functional and
(a) C is a non-empty Π 0,D 1 k-cover class of [t, +∞) for some k, t ∈ ω
(b) F ν ∪ {x} is R D -transitive for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C, every x ∈ Z ν and each ν < k
(c) Z ν is included in a minimal R D -interval of F ν for every Z 0 ⊕ · · · ⊕ Z k−1 ∈ C and each ν < k.
Once again, C is actually a Π 0,D 1 formula denoting a non-empty Π 0,D 1 class. A condition d = ( E, D, T ) extends c = ( F , C, R) (written d ≤ c) if R = T and there exists a function f : parts(D) → parts(C) such that the properties (i) and (ii) mentioned above hold.
Given some condition c = ( F , C, R), parts(c) = { c, ν : ν ∈ parts(C)}. Define U as c∈P parts(c), that is, the set of all pairs ( F , C, R), ν where ν ∈ parts(C). The operator init(n) returns the condition ({∅}, {ω}, R n ) where R n is the nth primitive recursive tournament functional. Last, define sat( c, ν ) to be the collection of all R D -transitive subtournaments satisfying the Mathias precondition (F ν , X ν ) where X ν is non-empty for some ν X ν ∈ C. The additional non-emptiness requirement of X ν in the definition of the sat operator enables us to "disable" some part by setting X ν = ∅. Without this requirement, the property (i) of a module support would not be satisfied. Moreover, since every cover class has an acceptable part, there is always one part ν in C such that sat( c, ν ) ≠ ∅.
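Under the usual reading of a Mathias precondition $(F, X)$ as the requirement $F \subseteq G \subseteq F \cup X$ (an assumption about notation, not stated explicitly here), the sat operator just defined reads:

```latex
\[
\mathrm{sat}(\langle c, \nu\rangle)
= \bigl\{\, G \;:\; G \text{ is an } R^D\text{-transitive subtournament and }
F_\nu \subseteq G \subseteq F_\nu \cup X_\nu
\text{ for some } \textstyle\bigoplus_\mu X_\mu \in C \text{ with } X_\nu \neq \emptyset \,\bigr\}.
\]
```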
Lemma 5.3
The tuple S EM is a module support.
Proof. We must check that if d ≤_P c for some c, d ∈ P, then there is a function g : parts(d) → parts(c) such that sat(ν) ⊆ sat(g(ν)) for each ν ∈ parts(d). Let d = (E, D, R) and c = (F, C, R) be such that d ≤_P c. By definition, there is a function f : parts(D) → parts(C) satisfying the properties (i)-(ii). Let g : parts(d) → parts(c) be defined by g(⟨d, ν⟩) = ⟨c, f(ν)⟩. We claim that g is a refinement function witnessing d ≤_P c. Let G be any set in sat(⟨d, ν⟩). We will prove that G ∈ sat(⟨c, f(ν)⟩). The set G is an R^D-transitive subtournament satisfying the Mathias condition (E_ν, X_ν), where X_ν ≠ ∅, for some ⊕_ν X_ν ∈ D. By (ii), since X_ν is non-empty, there is some ⊕_ν Y_ν ∈ C refined by ⊕_ν X_ν such that (E_ν, X_ν) Mathias extends (F_f(ν), Y_f(ν)). Hence G satisfies the Mathias condition (F_f(ν), Y_f(ν)), which finishes the proof.
We next define an implementation of the module M_EM = ⟨S_EM, box, unbox, prog⟩ as follows. Given some condition c = (F, C, R), some ν ∈ parts(C) and some Σ^0_1 formula ϕ(D, G), box(⟨c, ν⟩, ϕ) returns the Σ^0_1 formula ψ(D) which holds if there is a finite sequence of finite sets E_0, . . ., E_{r−1} and a finite sequence of functions g_0, . . ., g_{r−1} such that (1) for every X_0 ⊕ · · · ⊕ X_{k−1} ∈ C, there is some i < r such that E_i ⊆ X_ν; (2) for every i < r and every 2-partition E^0, E^1 of E_i, g_i(E^0, E^1) is a finite R^D-transitive subset of some E^j such that ϕ(D, F_ν ∪ g_i(E^0, E^1)) holds. If the answer to ψ(D) is ⟨No⟩, unbox(⟨c, ν⟩, ⟨No⟩) returns the tuple ⟨d, f, b⟩ where d = (E, D, R) is a condition such that d ≤_f c, defined as follows. For every part µ ≠ ν of c, create a part µ in d and set E_µ = F_µ. Furthermore, fork the part ν into two parts ν_0 and ν_1 in d and set E_{ν_i} = F_ν for each i < 2. Define D to be the Π^{0,D}_1 class of all covers ⊕_µ Y_µ f-refining some cover ⊕_ν X_ν ∈ C and such that for every i < 2 and every finite R^D-transitive set E ⊆ Y_{ν_i}, ϕ(D, F_ν ∪ E) does not hold. Suppose now that the answer to ψ(D) is a = ⟨Yes, r, E_0, . . ., E_{r−1}, g_0, . . ., g_{r−1}, a′⟩ where a′ is a function which, on every i < r and every 2-partition E^0, E^1 of E_i, returns an answer to ϕ(D, F_ν ∪ g_i(E^0, E^1)). The function unbox(⟨c, ν⟩, a) returns the tuple ⟨d, f, b⟩ where d is a condition such that d ≤_f c and whose definition has been described in Subsection 5.2. The function b : parts(d) → Ans[D, G] returns, on every part µ = ⟨i, E^0, E^1⟩, the tuple ⟨Yes, a′(i, E^0, E^1)⟩.
Last, given some condition c = (F, C, R) and some ν ∈ parts(C), prog(⟨c, ν⟩) is the query ϕ(D, G) = (∃n)[n ∈ G ∧ n > max(F_ν)]. Note that we cannot force ¬ϕ(D, G) on every part ⟨c, ν⟩, since every cover class has an acceptable part. Applying the query lemma infinitely many times to the progress operator ensures that if we take any path through the infinite tree of the acceptable parts, the resulting R^D-transitive subtournament will be infinite.

Proof. We need to ensure that, given some part ν of some condition c = (F, C, R) and some answer a to a Σ^0_1 formula ψ(D) = box(⟨c, ν⟩, ϕ) where ϕ(D, G) is a Σ^0_1 formula, unbox(⟨c, ν⟩, a) outputs a tuple ⟨d, f, b⟩ where d = (E, D, R) is a condition such that d ≤_f c, where f forks only part ν of c, and for every part µ of d such that f(⟨d, µ⟩) = ⟨c, ν⟩ and every set G ∈ sat(⟨d, µ⟩), b(⟨d, µ⟩) is an answer to ϕ(D, G).
Suppose that a = ⟨No⟩. By definition of sat(⟨d, µ⟩) and by construction of d, G is R^D-transitive and satisfies the Mathias condition (E_{ν_i}, Y_{ν_i}) for some i < 2 and some cover ⊕_µ Y_µ ∈ D. In particular, E_{ν_i} = F_ν, and Y_{ν_i} is such that for every finite R^D-transitive set E ⊆ Y_{ν_i}, ϕ(D, F_ν ∪ E) does not hold. In particular, taking E = G ∖ F_ν, ϕ(D, G) does not hold.
Suppose now that a = ⟨Yes, r, E_0, . . ., E_{r−1}, g_0, . . ., g_{r−1}, a′⟩ where a′ is a function which, on every i < r and every 2-partition E^0, E^1 of E_i, returns an answer to ϕ(D, F_ν ∪ g_i(E^0, E^1)). By definition of sat(⟨d, µ⟩) and by construction of d, G is R^D-transitive and satisfies the Mathias
The separation
We have defined a module M_EM for the Erdős-Moser theorem. In this subsection, we explain how we create an ω-model of EM which is not a model of AMT from the infinite sequence of stage trees constructed in Subsection 4.3. Given the uniform enumeration R_0, R_1, . . . of all primitive recursive tournament functionals, we shall define an infinite sequence of sets X_0, X_1, . . . together with a ∆^0_2 function f such that for every s: 1. X_{s+1} is an infinite, transitive subtournament of R^{X_0 ⊕ · · · ⊕ X_s}; 2. f dominates every X_0 ⊕ · · · ⊕ X_s-computable function.
By property 2, any ∆^0_2 approximation f̃ of the function f is a computable instance of the escape property with no solution in M_X, that is, such that no function in M_X escapes f. By the computable equivalence between the escape property and the atomic model theorem (see Subsection 1.3), M_X ⊭ AMT. By Lemma 5.2, M_X |= EM ∧ COH.
Start with X_0 = ∅ and the ∆^0_2 enumeration T_0 ≥ T_1 ≥ . . . of stage trees constructed in Subsection 4.3, and let c_0 ≥ c_1 ≥ . . . be the sequence of their roots. The set U of their parts forms an infinite, finitely branching tree, whose structure is given by the refinement functions. Moreover, by the construction of the sequence T_0, T_1, . . ., for every s, there is some part ν in c_{s+1} refining some part µ in c_s and forcing prog(µ). Call such a part ν a progressing part. We may also consider every part of c_0 to be a progressing part, for the sake of uniformity. By the implementation of prog, if ν is a progressing part which refines some part µ, then µ is also a progressing part. Therefore, the set of progressing parts forms an infinite subtree U_1 of U.
Let ν_0, ν_1, . . . be an infinite path through U_1. Notice that sat(ν_s) ≠ ∅. Indeed, if sat(ν_s) = ∅, then the part ν_s is empty in C_s, where c_s = (E_s, C_s), and therefore we could not find some progressing part ν_{s+1} refining ν_s. Therefore, the set ⋂_s sat(ν_s) is non-empty. Let X_1 ∈ ⋂_s sat(ν_s). By definition of sat(ν_s), X_1 is a transitive subtournament of R^{X_0}. By definition of prog, for every s and every set G ∈ sat(ν_s), there is some n ∈ G such that n > s. Therefore, the set X_1 is infinite, so property 1 is satisfied.
Repeat the procedure with the next sequence of stage trees, and so on. We obtain an infinite sequence of sets X_0, X_1, . . . satisfying property 1. Let f be the ∆^0_2 function which, on input x, returns max(U_x) + 1, where U_x is the finite set given by the domination lemma (Lemma 4.9) for stage trees of depth x. Fix some Turing index e such that Φ^{X_0 ⊕ · · · ⊕ X_i}_e is total. By the domination lemma, for every x ≥ max(e, i), Φ^{X_0 ⊕ · · · ⊕ X_i}_e(x) ∈ U_x, hence Φ^{X_0 ⊕ · · · ⊕ X_i}_e(x) ≤ max(U_x) < f(x). Therefore the function f dominates every X_0 ⊕ · · · ⊕ X_i-computable function. This finishes the proof of Theorem 5.1.
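Restated in display form, the domination argument of the last paragraph reads as follows (notation as in Lemma 4.9):

```latex
f(x) = \max(U_x) + 1, \qquad
\Phi_e^{X_0 \oplus \cdots \oplus X_i}(x) \in U_x
\;\Longrightarrow\;
\Phi_e^{X_0 \oplus \cdots \oplus X_i}(x) \le \max(U_x) < f(x)
\quad \text{for every } x \ge \max(e, i).
```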
Separating combined principles from AMT
The domination framework has two purposes. First, it emphasizes the key elements of the construction and gets rid of implementation technicalities by abstracting the main operations into operators. Second, it enables us to separate conjunctions of principles from AMT, using the ability to compose modules into a compound one. In this section, we will take advantage of the latter to prove that EM is not strong enough to prove AMT, even when allowing compactness arguments.
Theorem 6.1 There is an ω-model of EM ∧ COH ∧ WKL which is not a model of AMT.
In subsection 6.1, we will show how to compose multiple modules to obtain separations of conjunctions of principles from AMT. Then, in subsection 6.2, we will provide a module for WKL and will show how to choose properly the sequence of sets X 0 , X 1 , . . . to obtain an ω-model of WKL.
Composing modules
When building the second-order part I of an ω-model of a countable collection of principles P_0, P_1, . . ., we usually interleave the instances of the various P's so that each instance receives attention after a finite number of iterations. This is exactly what we will do when composing module supports S_i = ⟨P_i, U_i, parts_i, init_i, sat_i⟩ for P_i for each i ∈ ω, in order to obtain a compound module support S = ⟨P, U, parts, init, sat⟩ for ⋀_{i∈ω} P_i. The domain of the partial order P is obtained by taking the disjoint union of the partial orders P_i. Therefore P = {⟨c, i⟩ : i ∈ ω ∧ c ∈ P_i}. The order is defined accordingly: ⟨d, j⟩ ≤_P ⟨c, i⟩ if i = j and d ≤_{P_i} c. Similarly, U = {⟨ν, i⟩ : i ∈ ω ∧ ν ∈ U_i}, parts(⟨c, i⟩) = {⟨ν, i⟩ : ν ∈ parts_i(c)} and sat(⟨ν, i⟩) = sat_i(ν).
The key element of the composition is the definition of init(n), which will return init i (m) if n codes the pair (m, i). This way, infinitely many iterations are responsible for making I satisfy P i for each i ∈ N. The construction within the domination framework therefore follows the usual construction of a model satisfying two principles.
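The dispatching init operator can be illustrated with a minimal Python sketch. The Cantor-style pairing below is an assumption (the text only requires some computable coding of pairs), and the init_i operators are stood in by placeholder functions:

```python
import math

def pair(m, i):
    # Cantor-style pairing: codes the pair (m, i) as a single natural number.
    return (m + i) * (m + i + 1) // 2 + m

def unpair(n):
    # Inverse of pair: recovers (m, i) from n.
    w = (math.isqrt(8 * n + 1) - 1) // 2
    t = w * (w + 1) // 2
    m = n - t
    return m, w - m

def make_init(inits):
    # inits: one init_i operator per principle P_i (a finite list here,
    # standing in for the countable family of the text).
    # The compound init(n) dispatches to init_i(m) when n codes (m, i),
    # so every principle is revisited infinitely often along n = 0, 1, 2, ...
    def init(n):
        m, i = unpair(n)
        return inits[i % len(inits)](m)
    return init
```

As n ranges over all naturals, each index i is hit for infinitely many values of m, which is exactly the interleaving the construction needs.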
Property (i) in the definition of a module support for S is inherited from property (i) of each S_i. Indeed, if ⟨d, j⟩ ≤_P ⟨c, i⟩, then j = i and d ≤_{P_i} c. By property (i) of S_i, there is a function f : parts_i(d) → parts_i(c) such that sat_i(ν) ⊆ sat_i(f(ν)) for each ν ∈ parts_i(d). Let g : parts(⟨d, i⟩) → parts(⟨c, i⟩) be defined by g(⟨ν, i⟩) = ⟨f(ν), i⟩. Then sat(⟨ν, i⟩) = sat_i(ν) ⊆ sat_i(f(ν)) = sat(g(⟨ν, i⟩)).
Given a module M_i = ⟨S_i, box_i, unbox_i, prog_i⟩ for P_i for each i ∈ ω, the definition of the compound module M = ⟨S, box, unbox, prog⟩ for ⋀_{i∈ω} P_i involves no particular subtlety. Simply redirect box(⟨ν, i⟩, ϕ) to box_i(ν, ϕ), unbox(⟨ν, i⟩, a) to unbox_i(ν, a), and prog(⟨ν, i⟩) to prog_i(ν). Again, the module properties of M are inherited from those of the M_i.
A module for WKL
Weak König's lemma states, for every infinite binary tree, the existence of an infinite path through it. The usual effective construction of such a path follows the classical proof of König's lemma: we build the path by finite approximations and consider the infinite subtree below the finite path constructed so far. The difficulty consists in finding which candidates, among the finite extensions, induce an infinite subtree.
First note that we do not share the same concerns as for the Erdős-Moser theorem about the choice of an instance, since WKL admits a universal instance, namely the tree whose paths are the completions of Peano arithmetic. Moreover, this universal instance is a primitive recursive tree functional.
It is natural to choose the infinite, computable binary tree functionals as our forcing conditions. A condition (tree) U extends T if U^D ⊆ T^D. A set G satisfies the condition T if G is an infinite path through T^D. Let us now see how we decide some Σ^0_1 query ϕ(D, G). Consider the following question: Q6: Is the set T^D ∩ {σ ∈ 2^{<ω} : ¬ϕ(D, σ)} finite?
Let Γ^D_ϕ = {σ ∈ 2^{<ω} : ¬ϕ(D, σ)}. Whenever ϕ(D, τ) holds and ρ ⪰ τ, ϕ(D, ρ) holds; thus Γ^D_ϕ is a tree. At first sight, the question Q6 seems Σ^{0,D}_2. However, T^D ∩ Γ^D_ϕ is a tree, so the question can be formulated in a Σ^{0,D}_1 way as follows: Q6': Is there some length n such that T^D ∩ Γ^D_ϕ has no string of length n? If the answer is negative, the extension T ∩ Γ_ϕ is valid and forces ϕ(D, G) not to hold. If the answer is positive, the condition T already forces ϕ(D, G) to hold. Note that there is a hidden application of our motto "if you cannot choose, try every possibility in parallel". Indeed, in many forcing arguments involving weak König's lemma, we ∅′-choose an extensible string σ ∈ T such that ϕ(D, σ) holds and T^{D,[σ]} is infinite. However, we meet the same problem as in the Erdős-Moser case, that is, we are unable to decide which of the σ's will be extensible into an infinite subtree. To be more precise, for every σ ∈ 2^n, there may be some D such that the set T^{D,[σ]} is finite. By taking T as our extension forcing ϕ(D, G) to hold, we in fact take ⋃_{σ ∈ 2^n ∩ T} T^{[σ]}, that is, the union of the candidate extensions T^{[σ]}. We are now ready to define a module support S_WKL = ⟨P, U, parts, init, sat⟩ for WKL.
The set P is the set of conditions defined above. Each condition has only one part, which can be identified with the condition itself; therefore U = P. Accordingly, parts(T) = {T}. The function init(n) always returns the universal instance of WKL. Last, sat(T) is the collection of the infinite paths through T^D.
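Returning to question Q6', the level-by-level finiteness test can be sketched in Python. Since the actual objects are infinite Π^0_1 classes, the trees are represented here by membership predicates on binary strings (tuples of bits), and the search is truncated at a hypothetical depth bound, which the real ∅′-argument does not need:

```python
def forces_phi(tree, gamma, max_depth):
    # Q6' sketch: is there a length n <= max_depth at which the pruned tree
    # T ∩ Γ_φ has no string?  `tree` and `gamma` are membership predicates
    # on binary strings; gamma plays the role of Γ_φ = {σ : ¬φ(D, σ)}.
    level = [()]  # strings of the current length in T ∩ Γ_φ (root unchecked)
    for n in range(max_depth):
        level = [s + (b,) for s in level for b in (0, 1)
                 if tree(s + (b,)) and gamma(s + (b,))]
        if not level:
            return True   # answer Yes: T already forces φ(D, G)
    return False          # no witness found up to the depth bound
```

For example, if Γ_φ contains only strings of length at most 2, the pruned tree dies out at length 3 and the answer is Yes; if Γ_φ contains every all-zero string, the pruned tree has a string at every length and no finite depth witnesses Yes.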
Finally, we explain how to extract a solution to the universal instance of WKL below some set D, given the infinite decreasing sequence of stage trees constructed in Subsection 4.3. Given the sequence of stage trees whose roots are T_0 ≥ T_1 ≥ . . ., there is not much choice, since each condition T_s has only one part, namely the tree T_s itself. By compactness, ⋂_s T^D_s is infinite. Take any infinite path G through ⋂_s T^D_s. This completes the proof of Theorem 6.1.
Beyond the atomic model theorem
We conclude this section with a discussion of the generality of the domination framework and its key properties.
Ramsey-type theorems satisfy one common core combinatorial property: given an instance I of a principle P, for every infinite set X ⊆ N, there is a solution Y ⊆ X of I. This property makes Ramsey-type principles combinatorially weak. Indeed, Solovay [30] proved that the sets computable from every solution to a given instance I of P are precisely the hyperarithmetical ones. Moreover, Groszek and Slaman [10] proved that the hyperarithmetical sets are precisely the sets S admitting a modulus, namely, a function f such that every function dominating f computes S. Put together, these results can be interpreted as stating that the coding power of Ramsey-type principles comes from the sparsity of their solutions. If an instance can force its solutions H = {x_0 < x_1 < . . .} to have arbitrarily large gaps, then the principal function p_H defined by p_H(n) = x_n will be fast-growing, and will carry some computational power.
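The link between sparsity and growth can be made concrete with a toy Python example; the sets here are finite stand-ins for the infinite sets of the text:

```python
def principal(xs, n):
    # p_H(n): the (n+1)-st element of H = {x_0 < x_1 < ...},
    # given its increasing enumeration xs.
    return xs[n]

# A set forced to have arbitrarily large gaps has a fast-growing principal
# function; a gap-free set has the slowest-growing one possible.
sparse = [2 ** k for k in range(20)]   # the gap before x_n doubles each step
dense = list(range(20))                # no gaps: p_H(n) = n
```

Here `principal(sparse, n)` grows exponentially while `principal(dense, n)` is the identity, which is the finite shadow of the statement that sparse solutions code fast-growing functions.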
The strength of many principles in reverse mathematics can be explained in terms of their ability to ensure gaps in the solutions. ACA has instances whose solutions are everywhere sparse, in that the principal function of any solution dominates the modulus function of ∅′. Some principles, such as COH, AMT or FIP, imply the existence of hyperimmune sets, which are sets sparse enough that their principal function is not dominated by any computable function. These sets have infinitely many gaps, but the distribution of the gaps cannot be controlled.
Another important aspect of this gap-based analysis is the definitional complexity of the gaps. For example, AMT has the ability to ensure ∆^0_2 gaps, which gives it more computational power than COH or EM, which can only ensure ∆^0_1 gaps. This is the main feature used by the domination framework to prove that COH ∧ EM does not imply AMT. The framework was designed to exploit this weakness of the principles, and is therefore relatively specific to the atomic model theorem. However, some weakenings of AMT, such as the finite intersection property, share similar features, in that they can also be characterized purely in terms of hyperimmunity properties. The author leaves open the following question: Question 6.4 Does COH imply FIP in RCA_0?
Hyponatremia in a Patient With Vasodilatory Shock Due to Overdose of Antihypertensive Medications: A Case Report
Vasodilatory shock can be caused by septic shock, neurogenic shock, anaphylaxis, drugs, and toxins. Vasopressin is commonly used for the restoration of vasomotor tone in vasodilatory shock due to sepsis. This agent exerts its vasoconstrictive effect via smooth muscle V1 receptors and has antidiuretic activity via kidney V2 receptors. Stimulation of V2 receptors results in the integration of aquaporin 2 channels into the apical membrane of collecting ducts, leading to free water reabsorption. This antidiuretic action of vasopressin predisposes to hyponatremia. Yet, the development of hyponatremia with the use of vasopressin in critically ill patients with sepsis is rare. A 75-year-old female presented after a suicidal attempt by ingestion of amlodipine and lisinopril. Despite adequate intravenous fluid administration, she remained hypotensive, requiring the initiation of vasopressors. She developed hyponatremia after initiation of vasopressin due to the absence of endotoxemia, and her serum sodium normalized once vasopressin was discontinued. We recommend monitoring for hyponatremia as a complication of vasopressin, especially in patients without sepsis.
Introduction
Hyponatremia is the most common electrolyte abnormality in hospitalized and critically ill patients, with a prevalence of 15% to 30% [1]. The elderly population is at increased risk of developing hyponatremia due to the presence of various factors, including frequent prescription of drugs associated with hyponatremia, different diseases contributing to increased antidiuretic hormone, and other mechanisms, including "tea and toast" syndrome [2]. Up to 40% to 70% of cases of hyponatremia are iatrogenic/hospital-acquired [3]. Although total body salt content may be low, most hyponatremias arise from electrolyte-free water retention [4]. In septic shock, the endogenous vasopressin level decreases in later stages; thus, vasopressin is usually used in addition to norepinephrine for its vasopressor effects. In addition to vasopressor effects, vasopressin also has antidiuretic effects. However, hyponatremia is not frequently encountered with the use of vasopressin due to downregulation of V2 receptors in septic shock. Here, we present a rare case of iatrogenic hyponatremia with exogenous vasopressin use for the management of hypotension from an intentional overdose of antihypertensive medications.
This article was previously presented as an oral presentation at the National Kidney Foundation of Illinois (NKFI) Citywide Grand Rounds in Chicago, IL, on September 15, 2022.
Case Presentation
A 75-year-old female with a past medical history of hypertension and untreated depression presented after a suicidal attempt by ingestion of amlodipine 5 mg and lisinopril 40 mg (60 tablets each). Her blood pressure (BP) was 99/53 mmHg on presentation, pulse rate was 62/minute, respiratory rate was 14/minute, temperature was 98.9°F, and oxygen saturation was 98% on room air. Serum chemistries are shown in Table 1. Of note, her serum creatinine one year before this admission was 1.0 mg/dL. Urine drug screen as well as plasma alcohol, salicylate, and acetaminophen levels were negative. She received 2 L of normal saline for hypotension along with glucagon 10 mg (5 mg x 2) and calcium gluconate 1 g as antidotes for calcium channel blocker toxicity. She continued to have hypotension despite these measures, so she was started on norepinephrine; vasopressin and epinephrine were subsequently added to maintain normal mean arterial pressures. She was noticed to have twitching in her facial muscles and right arm, for which she received levetiracetam. CT head was negative for any abnormality. Her serum creatinine improved to her baseline within 24 hours of presentation, and she was continued on norepinephrine and vasopressin for maintenance of her BP. Within 12 hours of normalization of serum creatinine, her serum sodium level decreased from 141 to 125, later reaching a nadir of 120. Serum and urine studies for the workup of hyponatremia are shown in Table 2.
FIGURE 2: Serum sodium and urine output during hospitalization
The patient was on vasopressin from 07/07 (0505) to 07/11 (1600). Urine output in mL and sodium in mmol/L.
She was subsequently discharged to a psychiatric facility for continued care.
Discussion
Amlodipine is a dihydropyridine calcium antagonist that blocks the L-type calcium channels on vascular smooth muscle, thus reducing peripheral arterial resistance and BP. It has a large volume of distribution (21 L/kg) with a high degree of protein binding (98%). In patients with normal renal function, it is slowly cleared, with a terminal elimination half-life of 40 to 50 hours [5]. Calcium channel blockers (CCB) are used for the treatment of hypertension, angina pectoris, and other clinical conditions. The potential toxicity of these agents is often underappreciated. Almost 9500 cases of CCB intoxication were reported to poison centers in the United States during 2002, including intentional or unintentional overdoses [6]. Dihydropyridine intoxication results in arterial vasodilation and reflex tachycardia; however, there can also be myocardial depressant effects, resulting in bradycardia. Amlodipine induces nitric oxide-dependent vasodilatation in coronary and peripheral arteries and may inhibit the angiotensin-converting enzyme (ACE) itself [7]. In conjunction with ACE inhibitors or angiotensin receptor blockers (ARBs), these complex effects might worsen toxicity [8]. Management of patients with CCB intoxication depends on the severity of symptoms, with interventions including intravenous crystalloids, atropine, calcium salts, glucagon, high-dose insulin/glucose infusion, vasopressors, and intravenous lipid emulsion therapy [9,10]. Orogastric lavage is effective only in patients who present within one to two hours of ingestion.
Vasodilatory shock is characterized by a failure of peripheral vasoconstriction in the face of low systemic arterial pressure [11], which, in our case, was caused by intentional intoxication with antihypertensive medications. In addition to volume resuscitation, vasopressors are usually required for the management of vasodilatory shock. Norepinephrine is the first-line agent for its potent vasoconstrictive effects as well as a modest increase in cardiac output [12]. Vasopressin is a second-line agent in refractory vasodilatory shock, used to improve BP and reduce the dose of the first-line agent. Vasopressin is both a vasopressor and an antidiuretic hormone. It has vasoconstrictive effects via V1 receptors as well as antidiuretic effects via V2 receptors on the collecting ducts [13]. A fall in endogenous vasopressin levels may occur in the late stages of shock; exogenous vasopressin is thus frequently utilized in refractory shock [14]. Although plasma hypertonicity serves as the primary stimulus for arginine vasopressin (AVP) release under normal conditions, the responsiveness of the osmoreceptor mechanisms appears to be significantly altered by modest changes in blood volume, indicating a close interrelationship between the two variables in the control of AVP release [15]. A potential consequence of exogenous vasopressin administration is water intoxication with subsequent hyponatremia. Despite the common use of vasopressin in intensive care settings (the median hospital rate of vasopressin use for septic shock was 11.7 [16]), hyponatremia is a rare complication of vasopressin. In the randomized double-blind Vasopressin and Septic Shock Trial (VASST), vasopressin was compared with norepinephrine in septic shock patients (382 patients randomized to receive norepinephrine and 396 patients randomized to receive vasopressin); only one patient developed hyponatremia (defined as <130 mEq/L in this trial) in each group (0.3%) [17]. This can be explained by downregulation of vasopressin V2 receptors in septic shock. This has been shown in the peritoneal endotoxin-challenged rat model, where lipopolysaccharide (LPS) injection was associated with a decrease in V2 vasopressin receptors as well as a decrease in aquaporin 2 in the kidney [18]. Coadministration of corticosteroids and catecholamines with vasopressin also reduces the risk of development of hyponatremia in septic shock patients [19].
We hypothesize that the absence of endotoxemia in our patient resulted in the development of hyponatremia. This phenomenon has been shown in experimental models in the past as well, where the administration of vasopressin (Pitressin) and water to normal subjects resulted in the development of hyponatremia [20]. Another possible predisposing factor in our case was the presence of normal renal function at the time of its occurrence, an uncommon finding in septic patients. After the improvement of BP, discontinuation of vasopressin resulted in rapid diuresis and normalization of serum sodium levels in our patient without any additional interventions.
Conclusions
Vasopressin is one of the most commonly used vasopressors in patients with septic shock. Despite the antidiuretic effect of vasopressin, the development of hyponatremia is rare in septic patients due to endotoxemia-mediated downregulation of V2 receptors and likely resistance to its antidiuretic effects due to concomitant acute kidney injury (AKI). In our case, we hypothesize that the absence of endotoxemia and resolution of AKI led to free water retention and hyponatremia with vasopressin use, and rapid free water excretion and normalization of serum sodium with cessation of vasopressin. This case highlights the importance of this rare but potentially serious side effect of vasopressin.
TABLE 1: Serum chemistries (Component, Result, Reference range and units)
TABLE 2 : Blood and urine tests for hyponatremia evaluation
change in serum sodium. Her BP improved and her vasopressin was tapered off on day five of admission, after which she had rapid diuresis of free water with resultant normalization of serum sodium (Figures 1, 2).
Atomistic Simulation of the Undissociated 60° Basal Dislocation in Wurtzite GaN
We have carried out atomistic computer simulations, based on an efficient density-functional-based tight-binding method, to investigate the core configurations of the 60° basal dislocation in wurtzite GaN. Our energetic calculations on the undissociated dislocation demonstrate that the glide configuration with N polarity is the most energetically favorable over both the glide and the shuffle sets.
Introduction
Wurtzite GaN layers were initially grown along the [0001] direction, also called the polar direction [1]. This led to the fabrication of optoelectronic devices, based on heterostructures, which are strongly affected by spontaneous and piezoelectric polarization effects [2]. These effects are at the origin of a high internal electrostatic field which increases the separation between electrons and holes, thus reducing the overlap of their wavefunctions [2]. The latter causes a strong current dependence of the emission energy and a red shift of optical transitions, and reduces the emission efficiency of optoelectronic devices [2]. The polarization-related effects in wurtzite GaN heterostructures can be completely avoided by adopting growth along alternative orientations. Hence, various growth directions, both non-polar and semi-polar, were explored [3]. GaN/AlGaN heterostructures grown along non-polar or semi-polar directions were then proven to be free from polarization effects and thus demonstrated a clear improvement of their optical properties with respect to those elaborated along the polar direction.
The nature of the threading dislocations contained in a wurtzite GaN layer is directly related to its growth direction. If the growth direction is [0001], i.e. the polar direction, the threading dislocations are perfect prismatic dislocations, which can be edge, screw or mixed [4]. However, if the growth direction is non-polar, the threading dislocations can be perfect or partial basal dislocations [4]. Perfect basal dislocations are screw and 60°-mixed, while partial basal dislocations are Shockley (edge, 30°-mixed), Frank and Frank-Shockley partials [5].
During the last decade, threading prismatic dislocations were extensively investigated in gallium nitride, at both the experimental and theoretical levels [4]. For these dislocations, models of their core structures were proposed and their impact on the electronic properties of GaN was nearly elucidated [6][7][8]. The body of work dedicated to basal dislocations in GaN is still small compared to that on prismatic dislocations [4], and among basal dislocations, partials [9] have been investigated more than perfect ones [10,11]. The perfect screw dislocation was investigated atomistically and the energetic hierarchy of its core configurations was established by Belabbas et al. [12]. The perfect 60° dislocation was studied in cubic GaN by Blumenau et al. [13], but unfortunately no theoretical report exists for the wurtzite phase. At the experimental level, the perfect 60° dislocation was observed by electron microscopy. By combining conventional transmission electron microscopy and cathodoluminescence measurements, Albrecht et al. [14] investigated the 60° dislocation in wurtzite GaN and analyzed its electronic and optical activities. They found it to be likely responsible for a parasitic luminescence around 2.9 eV. However, due to the limited resolution of the microscope used, these authors were not able to establish whether the observed behavior is that of a full or a dissociated dislocation. In a subsequent study, Niermann et al. [15] observed a dissociated 60° dislocation by high resolution transmission electron microscopy. The separation between the two resulting Shockley partials was found to be smaller than 2 nm.
In the present contribution, we have carried out atomistic computer simulations to investigate the core configurations of the 60° basal dislocation in wurtzite GaN.
Models and Simulation Details
The 60° basal dislocation has a mixed (edge and screw) character. In the wurtzite crystal structure, this dislocation is perfect, has its line along a ⟨11-20⟩-type direction, and its Burgers vector is of the (1/3)⟨11-20⟩ type, with a magnitude equal to a (a = 3.18 Å is the basal lattice parameter of GaN). The 60° basal dislocation may have several core configurations, depending on the position of its centre. If the latter is located between two narrowly spaced {0001} planes, called the glide set, the dislocation has a glide configuration. However, if the centre of the dislocation is situated between two widely spaced {0001} planes, called the shuffle set, the dislocation has a shuffle configuration. As gallium nitride is a compound semiconductor, a glide (or a shuffle) core configuration may exist in two different polarities, gallium or nitrogen, depending on the nature of the terminating atom of the additional half plane, which is at the origin of the edge component of the dislocation.
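As a quick numerical check, the magnitude of an a-type Burgers vector of Miller-Bravais type (1/3)[1 1 -2 0] indeed equals the basal lattice parameter a stated above; the standard hexagonal in-plane basis below is an assumption for illustration:

```python
import math

a = 3.18  # basal lattice parameter of GaN, in angstrom
# Hexagonal basal basis vectors a1, a2 (120 degrees apart); a3 = -(a1 + a2).
a1 = (a, 0.0)
a2 = (-a / 2, a * math.sqrt(3) / 2)
a3 = (-(a1[0] + a2[0]), -(a1[1] + a2[1]))
# Burgers vector b = (1/3)[1 1 -2 0] = (a1 + a2 - 2*a3) / 3
b = tuple((a1[k] + a2[k] - 2 * a3[k]) / 3 for k in range(2))
b_mag = math.hypot(*b)
```

The computed `b_mag` reproduces |b| = a = 3.18 Å to machine precision.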
The 60° basal dislocation was modeled atomistically using the so-called supercell-cluster hybrid model [6][7][8]. The atoms at the model's lateral surfaces (Ga/N) have to be saturated by fractionally charged (1.25e/0.75e) pseudo-hydrogen atoms, which removes the dangling bonds and their associated unwanted gap states [6][7][8]. The supercell-cluster hybrids were at least doubled along the dislocation line direction in order to take into account any possible reconstruction along the line. The size of the models considered here ranges from 750 to 1000 atoms and their lateral extension is typically about 26 Å. Although the lateral extension of the model is finite, periodic boundary conditions were applied laterally to the dislocation line while including 50 Å of vacuum. The equilibrium atomic positions were obtained through a minimization procedure based on the conjugate gradient algorithm, where energies and forces are evaluated using the SCC-DFTB method [16]. During this step all the atoms, including those at the model's lateral surfaces, were allowed to relax freely. Equilibrium is reached when the maximum force acting on each atom of the system is well below 0.0001 a.u.
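The relaxation loop described above can be sketched as follows. This is a toy one-dimensional chain with a hypothetical harmonic spring potential, not the SCC-DFTB energetics of the paper, but it illustrates the conjugate-gradient minimization with the maximum-force stopping criterion:

```python
def energy(x):
    # Toy potential: unit-length "bonds" between neighbouring atoms on a line.
    return 0.5 * sum((x[i + 1] - x[i] - 1.0) ** 2 for i in range(len(x) - 1))

def forces(x):
    # Force on each atom: f_i = -dE/dx_i for the toy potential above.
    d = [x[i + 1] - x[i] - 1.0 for i in range(len(x) - 1)]
    f = [0.0] * len(x)
    for i, di in enumerate(d):
        f[i] += di
        f[i + 1] -= di
    return f

def relax_cg(x, fmax=1e-4, max_steps=10000):
    # Fletcher-Reeves conjugate-gradient relaxation: stop once the largest
    # force component on any atom is below fmax (the paper uses 1e-4 a.u.).
    f = forces(x)
    d = f[:]
    for _ in range(max_steps):
        if max(abs(fi) for fi in f) < fmax:
            break
        fd = sum(fi * di for fi, di in zip(f, d))
        if fd <= 0:            # safeguard: restart with steepest descent
            d = f[:]
            fd = sum(fi * fi for fi in f)
        # crude backtracking line search along the search direction d
        step, e0 = 1.0, energy(x)
        while step > 1e-12 and energy([xi + step * di for xi, di in zip(x, d)]) > e0 - 1e-4 * step * fd:
            step *= 0.5
        x = [xi + step * di for xi, di in zip(x, d)]
        f_new = forces(x)
        beta = sum(fi * fi for fi in f_new) / sum(fi * fi for fi in f)
        d = [fi + beta * di for fi, di in zip(f_new, d)]
        f = f_new
    return x
```

Starting from a distorted chain, the loop drives every bond back to unit length, with residual forces below the chosen threshold.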
Results and Discussion
For the 60° basal dislocation, we have considered four core configurations: a shuffle configuration with nitrogen polarity (60°-S_N), a shuffle configuration with gallium polarity (60°-S_Ga), a glide configuration with gallium polarity (60°-G_Ga), and a glide configuration with nitrogen polarity (60°-G_N). These core configurations are represented in Figures 1(a)-1(d), respectively. In the following we present and discuss our results concerning the atomic structure of these core configurations and their energetics.
Atomic Core Structure
The 60°-S_N core configuration (Figure 1(a)) presents a structure with an asymmetric 8-atom ring. This differs from the 8-atom ring structure exhibited by the prismatic edge dislocation, which possesses mirror-plane symmetry [17]. All the atoms forming the core are fully coordinated except those of column (1), which carry dangling bonds (Figure 1(a)). The most compressed bonds (-8.21%) are established between the atoms of columns (1) and (8), while the most stretched bonds (+13.33%) are established between the atoms of columns (5) and (6). The chemical bonds involved in the core present an angular dispersion ranging from 93° to 128°.
The 60°-S_Ga core configuration (Figure 1(b)) has, like the previous one, a structure with an asymmetric 8-atom ring. However, while the 60°-S_N configuration exhibits a single-period structure, a complex reconstruction takes place in the 60°-S_Ga configuration, doubling its period along the dislocation line. This reconstruction consists in establishing alternating Ga-Ga bonds (2.81 Å) between the atoms of columns (1) and (5), while leaving dangling bonds in column (6) within a 2a period. In this core configuration, the most compressed Ga-N bonds (-7.18%) involve the low-coordinated atoms of column (1) and those of column (8). The most stretched Ga-N bonds (+17.44%) are established between the atoms of columns (5) and (6). The most extreme bond angles (79° and 154°) are recorded for the atoms of column (5).
The 60°-G_Ga core configuration (Figure 1(c)) exhibits a structure with an asymmetric 5/7-atom ring. This core configuration includes only Ga-Ga bonds separating the 5-atom and 7-atom rings, which makes it different from the symmetric core configuration of the prismatic edge dislocation, where both Ga-Ga and N-N bonds separate the two atomic rings [17]. In the 60°-G_Ga configuration, the Ga-Ga bonds (2.32 Å) are established between the atoms of columns (3) and (9); the latter contains under-coordinated Ga atoms. The most compressed Ga-N bonds (-5.64%) are established between the (3) and (9) atomic columns, while the most stretched bonds (+10.77%) are established between the atoms of columns (5) and (6) and those of columns (6) and (7). The most extreme bond angles (90° and 138°) are recorded for the atoms of column (3).
The 60°-G_N core configuration (Figure 1(d)) has a structure with an asymmetric 5/7-atom ring, which contains some N-N bonds. The considerable difference in bond length between the N-N bonds (1.58 Å) involved in this configuration and the Ga-Ga bonds involved in the previous one makes the 60°-G_N core configuration less spatially extended than the 60°-G_Ga one. In the 60°-G_N configuration, column (9) contains under-coordinated N atoms. The most compressed Ga-N bonds (-6.67%) are established between the atoms of columns (8) and (9). The most stretched bonds (+7.18%) involve, on the one hand, the atoms of columns (2) and (3) and, on the other hand, the atoms of columns (5) and (6). The bond angles present a dispersion ranging from 92° to 134°.
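The bond-strain percentages and bond-angle dispersions quoted above are simple geometric quantities computed from the relaxed coordinates. A minimal sketch, using a hypothetical ideal bond length and made-up atomic positions (not the actual relaxed coordinates of the paper):

```python
import numpy as np

D_BULK = 1.95  # hypothetical ideal Ga-N bond length in Angstrom (assumption)

def bond_strain(a, b):
    """Percent deviation of the a-b distance from the ideal bond length."""
    d = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return 100.0 * (d - D_BULK) / D_BULK

def bond_angle(center, n1, n2):
    """Angle (degrees) at `center` formed with neighbours n1 and n2."""
    v1 = np.asarray(n1, float) - np.asarray(center, float)
    v2 = np.asarray(n2, float) - np.asarray(center, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Illustrative (made-up) positions for three atoms near a core column:
ga = [0.0, 0.0, 0.0]
n_a = [1.95, 0.0, 0.0]   # unstrained bond
n_b = [0.0, 2.21, 0.0]   # stretched bond

strain = bond_strain(ga, n_b)     # about +13.3 %, comparable to the values quoted
angle = bond_angle(ga, n_a, n_b)  # 90 degrees for this geometry
```

Applying `bond_strain` and `bond_angle` over every core column pair yields the compression/stretch extremes and the angular dispersion reported for each configuration.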
Energetics
The energetic hierarchy of the four core configurations of the 60° basal dislocation was assessed through a combination of continuum elasticity theory and atomistic calculations based on the SCC-DFTB method. The total strain energy E_total associated with a dislocation can be represented as a sum of elastic (E_el) and core (E_core) contributions:

E_total = E_el + E_core.  (1)

Within linear elasticity, the elastic strain energy per unit length stored in a cylinder of radius R around the dislocation is given by [18]:

E_el = A ln(R/R_c),  (2)

where R_c is the dislocation core radius. For a mixed-type dislocation, the pre-logarithmic factor A is related to both the edge and screw components of the Burgers vector (b_e and b_s, respectively) and is given, within anisotropic elasticity, by [18]:

A = (K_e b_e^2 + K_s b_s^2)/(4π),  (3)

where the energy factors K_e and K_s, associated respectively with the edge and screw components, are expressed in terms of the elastic constants c_ij of the material [18]. Within the SCC-DFTB method, one can define the excess energy of a single atom as the difference between its energy in the system containing the defect and its energy in the bulk material. Hence, the total strain energy E_total contained in a cylinder of radius R around the dislocation is evaluated by summing the excess energies of the individual atoms belonging to this region. In order to determine the core parameters of the dislocation, i.e., the core energy and core radius, we plotted the total strain energy E_total versus ln(R) for the four considered core configurations (Figure 2). These curves exhibit three distinct domains: a central linear region bordered by two non-linear ones.
The linear region represents the so-called elastic region, while the non-linear region close to the centre of the dislocation represents the core region. The rapid increase of the strain energy in the second non-linear region is attributed to surface effects.
Fitting the linear parts of the strain-energy curves with equation (2) allowed us to determine the values of the pre-logarithmic factor A_fit, which is the slope of these curves. The obtained values deviate by -1.3% to -10.4% (Table 1) from the theoretical value A = 0.77 eV/Å, evaluated using equation (3) and the experimental values of the elastic constants.
The core radius of a particular core configuration is defined as the radius at which the strain-energy curve ceases to be linear on approaching the centre of the dislocation. The core energy is defined as the value of the energy corresponding to the core radius [7]. The obtained core energies and radii of the four core configurations of the 60° basal dislocation are summarized in Table 1. Comparison of the core energies, evaluated at a common radius of 6 Å, shows that within the glide set the configuration with a nitrogen core (60°-G_N) is energetically favorable over the configuration with a gallium core (60°-G_Ga). However, the opposite is observed in the shuffle set, where the configuration with a gallium core (60°-S_Ga) has a lower core energy than the configuration with a nitrogen core (60°-S_N). The 60°-G_N core configuration was found to be the most energetically favorable over both the glide and the shuffle sets. Moreover, our calculations show that the core-energy difference between the glide configurations (0.53 eV/Å) is larger than that between the shuffle ones (0.11 eV/Å). One can consider that the core energy of a given dislocation has two contributions: (i) a contribution due to heavily strained bonds (non-linear strain) and (ii) a contribution due to dangling bonds. Based on this consideration, one can attempt to understand the obtained energetic hierarchy of the core configurations of the 60° dislocation.
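The fitting procedure just described — a linear fit of the strain energy against ln(R) over the elastic window, from which A_fit, the core radius, and the core energy follow — can be sketched on synthetic data. The numbers below (core radius 6 Å, core energy 1.2 eV/Å, inner-core profile) are illustrative assumptions, with only A = 0.77 eV/Å taken from the text:

```python
import numpy as np

A_TH = 0.77            # theoretical pre-logarithmic factor, eV/Angstrom
RC, E_CORE = 6.0, 1.2  # assumed core radius (Angstrom) and core energy (eV/Angstrom)

# Synthetic strain-energy profile: linear in ln(R) outside the core radius,
# with a smooth placeholder shape inside it (illustrative only).
R = np.linspace(2.0, 20.0, 60)
E = np.where(R >= RC, E_CORE + A_TH * np.log(R / RC), E_CORE * (R / RC) ** 2)

# Fit only the elastic (linear-in-ln R) window, staying away from the
# surface-affected outer region mentioned in the text:
mask = (R >= RC) & (R <= 15.0)
slope, intercept = np.polyfit(np.log(R[mask]), E[mask], 1)

A_fit = slope
deviation_pct = 100.0 * (A_fit - A_TH) / A_TH
core_energy_fit = slope * np.log(RC) + intercept  # E_total evaluated at R = RC
```

Evaluating the fitted line at R = R_c recovers the core energy, exactly the procedure used to build Table 1; on real data the fitted slope deviates from the theoretical A, which is the -1.3% to -10.4% spread quoted above.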
As the two glide core configurations (60°-G_N, 60°-G_Ga) exhibit comparable bond distortions, one may argue that the contribution that establishes their energetic hierarchy is that of the dangling bonds. This implies that N dangling bonds are less energetic than Ga dangling bonds. As this should also hold for the shuffle configurations, one might expect the 60°-S_N configuration to be more energetically favorable than the 60°-S_Ga configuration. However, the inversion of the energetic hierarchy revealed by our calculations is directly related to the reconstruction that occurs at the 60°-S_Ga core. Indeed, by rearranging the core atoms, the reconstruction eliminates the Ga dangling bonds and leads to a particular bonding state that is less energetic than N dangling bonds.
Summary and Conclusions
By performing atomistic computer simulations, we have investigated the structure and energetics of the 60° basal dislocation core configurations in hexagonal gallium nitride. Our calculations were carried out using an efficient self-consistent-charge density-functional tight-binding method (SCC-DFTB).
For the undissociated 60° dislocation, we have considered four core configurations: two belonging to the glide set (60°-G_N, 60°-G_Ga) and two belonging to the shuffle set (60°-S_N, 60°-S_Ga). Each of these core configurations was found to contain a row of under-coordinated atoms. These atomic columns define the polarity (Ga/N) of the core, as they are located at the end of the additional half plane which is at the origin of the edge component of the dislocation. All the core configurations exhibit single-period structures except the 60°-S_Ga one, where reconstruction along the dislocation line leads to a structure with a double period.
Our energetic calculations demonstrate that within the glide set, the configuration with a nitrogen core (60°-G_N) is more energetically favorable than the configuration with a gallium core (60°-G_Ga). However, the opposite occurs in the shuffle set, where the configuration with a gallium core (60°-S_Ga) has a lower core energy than the configuration with a nitrogen core (60°-S_N). The 60°-G_N core configuration was found to be the most energetically favorable over both the glide and the shuffle sets.
Figure 1.
Figure 1. Ball-and-stick models of the relaxed core configurations of the mixed 60° basal dislocation, projected along the [1120] direction. Black balls represent gallium atoms and white balls nitrogen atoms. (a) The 60°-S_N core configuration. (b) The 60°-S_Ga core configuration. (c) The 60°-G_Ga core configuration. (d) The 60°-G_N core configuration.
"year": 2013,
"sha1": "e89d251548e49b9447f0f49e7c9f080e825f6488",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=38642",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e89d251548e49b9447f0f49e7c9f080e825f6488",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Microwave response of superconducting pnictides: extended $s_{\pm}$ scenario
We consider a two-band superconductor with relative phase $\pi $ between the two order parameters as a model for the superconducting state in ferropnictides. Within this model we calculate the microwave response and the NMR relaxation rate. The influence of intra- and interband impurity scattering beyond the Born and unitary limits is taken into account. We show that, depending on the scattering rate, various types of power law temperature dependencies of the magnetic field penetration depth and the NMR relaxation rate at low temperatures may take place.
Introduction
The recent discovery of Fe-based superconducting compounds [1] has stimulated research on unconventional superconductors. One of the most important and still unsettled issues is the symmetry of the superconducting gap function; so far, different experiments produce conflicting results. As regards measurements of the penetration depth and the NMR relaxation rate, a power-law behavior at low temperatures is now clearly established, which is a signature of unconventional order-parameter symmetry. One possible pairing scenario is a superconductor with two relatively small semimetallic Fermi surfaces, separated by a finite wave vector Q, with relative phase π between the two order parameters. This is the so-called s± model, first proposed in Ref. [2]. In our previous work [3] we have shown that the s± model with strong impurity scattering can explain the power-law behavior of the NMR relaxation rate. It is therefore important to extend this formalism to the microwave properties of a two-band s± superconductor, in particular the magnetic-field penetration depth and the real part of the complex conductivity, since experimental data are now available for single crystals of Fe-based superconductors.
In this paper we calculate the microwave response and the NMR relaxation rate for a model s ± superconductor in which impurity scattering is treated beyond the Born limit and discuss the relevance to the experimental data for Fe-based superconducting compounds.
General expressions
We describe a multiband superconductor in the framework of the Eliashberg approach, i.e., through equations for the renormalization function Z_i(ω) and the complex order parameter φ_i(ω). As shown in the first reference of [21], the BCS approach can give highly inaccurate results in the case of interband superconductivity due to its neglect of mass renormalization. In addition, there is evidence for strong coupling in the pnictides, with many experimentally determined ∆/T_c ratios substantially exceeding the BCS value of 1.76, and we therefore employ the Eliashberg equations.
On the real frequency axis they have the following form, assuming uniform (band-independent) impurity scattering (see, e.g., Refs. [3,4,5]), where n_i(ω) is a partial density of states, γ = 2cσ/πN(0) is the normal-state scattering rate, N(0) is the total density of states (i.e., summed over both bands) at the Fermi level, c is the impurity concentration, and σ = [πN(0)v]^2/(1 + [πN(0)v]^2) is the impurity strength (σ → 0 corresponds to the Born limit, while σ = 1 to the unitary one). The kernels K^{∆,Z}_{ij}(z, ω) describe the electron-boson interaction, with the spin-fluctuation coupling function B_ij(Ω) = λ_ij πΩΩ_sf/(Ω²_sf + Ω²) entering the equation for φ, and |B_ij(Ω)| entering the equation for Z. Here λ_ij is the coupling constant pairing band i with band j and Ω_sf is the spin-fluctuation frequency. Note that all retarded interactions enter the equations for the renormalization factor Z with a positive sign.
We note that the band-independent impurity scattering enters through the second term on the right-hand side of Eq. (1), where γ is applied to both bands (albeit with a relative minus sign in the first equation due to the order-parameter sign change between bands). We have chosen such band-independent scattering for several reasons, including consistency with previously published work and to avoid a proliferation of parameter choices. However, recent work of Senga and Kontani [6] suggests that this assumption is justified on an experimental basis: their Fig. 4 shows that only γ_inter/γ_intra between 0.9 and 1 is consistent with the several sets of nuclear spin relaxation rate 1/T_1 data showing T^2.5-T^3.0 behavior over a very large temperature range. The theoretical rationale for such a comparatively large interband scattering rate remains unclear, but it can plausibly be related to the inherent disorder in these systems, with the dopant atoms themselves acting as scattering centers.
The microwave conductivity in the London (local, q ≡ 0) limit is expressed through Π_i(ω), the analytical continuation to the real frequency axis of the polarization operator (see, e.g., Refs. [7]-[11]), where the index R (A) corresponds to the retarded (advanced) branch of the complex function F^{R(A)} = Re F ± i Im F (the band index i is omitted), and the renormalized frequency is ω̃ = Z_i(ω)ω. Here ω^{pl}_{αβ} is the plasma frequency in different directions. In the dirty case the low-frequency limits of expressions (2) and (3) reduce to the strong-coupling generalization of the famous Mattis-Bardeen expressions [12], where σ^{dc}_i is the contribution to the static conductivity from the i-th band. Note that in the London limit there are no cross-terms connecting the two bands.
An important characteristic of the superconducting state is the penetration depth of the magnetic field, λ_{L,αβ}, in the local (London) limit, which is related to the imaginary part of the optical conductivity by Im σ_{αβ}(ω) = c²/(4πλ²_{L,αβ}ω), where α, β again denote Cartesian coordinates and c is the velocity of light. If we neglect strong-coupling effects (or, more generally, Fermi-liquid effects), then for a clean uniform superconductor at T = 0 we have the relation λ_{L,αβ} = c/ω^{pl}_{αβ}. Impurities and interaction effects drastically enhance the penetration depth, and it is convenient to introduce a so-called 'superfluid plasma frequency' ω^{sf}_{pl,αβ} through the relation ω^{sf}_{pl,αβ} = c/λ_{L,αβ}. It has often been stated that this quantity corresponds to the charge density of the superfluid condensate, but we point out that this is only the case for non-interacting clean systems at T = 0.
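The London relation between the imaginary part of the conductivity and the penetration depth can be checked numerically. A minimal sketch in Gaussian units; the depth (200 nm) and microwave frequency are illustrative assumptions, not values from this work:

```python
import math

C_CGS = 2.99792458e10  # speed of light, cm/s

def sigma2_from_lambda(lam_cm, omega):
    """Im sigma(omega) implied by a London depth: sigma2 = c^2/(4*pi*lambda^2*omega)."""
    return C_CGS ** 2 / (4.0 * math.pi * lam_cm ** 2 * omega)

def lambda_from_sigma2(sigma2, omega):
    """Invert the London relation: lambda = c / sqrt(4*pi*omega*sigma2)."""
    return C_CGS / math.sqrt(4.0 * math.pi * omega * sigma2)

lam = 2.0e-5     # 200 nm, an assumed penetration-depth scale
omega = 1.0e11   # assumed microwave angular frequency, rad/s
s2 = sigma2_from_lambda(lam, omega)
lam_back = lambda_from_sigma2(s2, omega)
```

Round-tripping through the two functions recovers the input depth, confirming that measuring Im σ(ω) at low frequency is equivalent to measuring λ_L.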
In the two-band model we have the standard expression (neglecting vertex corrections), where ω(n) and ∆(n) are the solutions of Eq. (1) continued to the imaginary (Matsubara) frequencies. The results of these calculations can thus be presented in the form of an effective superfluid plasma frequency ω^{sf}_{pl}. For the NMR relaxation rate, following [13], we can write down general expressions in which χ_±(q, ω) is the analytical continuation to the real axis of the Fourier transform of the correlator averaged over the impurity ensemble. Here S_±(r, −iτ) = exp(Hτ)S_±(r)exp(−Hτ), where H is the electron Hamiltonian and τ denotes imaginary time. The resulting expression contains cross-terms, in contrast to the microwave conductivity. In this paper, in the 1/T_1 calculation only these cross terms are used, to emphasize the interband character of the superconductivity, as it is these cross terms that are most enhanced by the nearly antiferromagnetic state within a more detailed RPA approximation. For a single-band system the full expression is proportional to Eq. (4) when σ^{dc}_1 → ∞ (Ref. [14]), but in multiband systems 1/T_1T and σ_1(ω → 0) can behave differently.
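The Matsubara-sum structure of the superfluid-density expression can be illustrated in the simplest possible setting: a single-band, weak-coupling BCS gap with the standard tanh interpolation for Δ(T). This is an assumed stand-in for illustration only — it is not the two-band Eliashberg solution actually computed in this paper:

```python
import math

def gap_bcs(t):
    """Delta(T)/Delta(0) via the common interpolation tanh(1.74*sqrt(Tc/T - 1));
    t = T/Tc (a standard approximation, not an exact result)."""
    return math.tanh(1.74 * math.sqrt(1.0 / t - 1.0)) if t < 1.0 else 0.0

def superfluid_density(t, delta0=1.76, n_mats=2000):
    """rho_s(T)/rho_s(0) = 2*pi*T * sum_{n>=0} Delta^2/(w_n^2 + Delta^2)^(3/2),
    with Matsubara frequencies w_n = pi*T*(2n+1); energies in units of Tc."""
    d = delta0 * gap_bcs(t)
    if d == 0.0:
        return 0.0
    total = 0.0
    for n in range(n_mats):
        wn = math.pi * t * (2 * n + 1)
        total += d * d / (wn * wn + d * d) ** 1.5
    return 2.0 * math.pi * t * total

rho_low = superfluid_density(0.05)   # close to 1 at low T
rho_high = superfluid_density(0.95)  # small near Tc
```

At T → 0 the sum reduces to the integral ∫dω Δ²/(ω² + Δ²)^{3/2} = 1, so the normalization comes out automatically; the interband pair-breaking terms of the full two-band theory modify this clean-limit result.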
Results and discussion
It is well known that pair-breaking impurity scattering can induce a substantial sub-gap density of states, which can produce power-law low-temperature behavior in a whole host of thermodynamic quantities, such as the specific heat, London penetration depth, nuclear spin relaxation rate, and even the optical conductivity. Such behavior has been well studied in the two canonical limits of weak (Born) scattering and strong (unitary) scattering [15], but the intermediate regime has received almost no attention. In addition, with the advent of multiband superconductivity in MgB2 and the apparent multiband, primarily interband superconductivity in the pnictides comes a need for further study of the intermediate regime in the interband case. Recent studies [16,19,17] have addressed the effects of impurities in the pnictides, but only in the Born or unitary limits. Here we study the important and likely more realistic intermediate regime, with σ, effectively the scattering strength, varied from σ = 0 (the Born limit) to σ = 1 (the unitary limit). As stated earlier, for all calculations the impurity scattering rates are γ_intra = γ_inter = 0.8∆_0.
We now illustrate the above discussion with specific numerical models. First, we present numerical solutions of the Eliashberg equations using the spin-fluctuation model for the spectral function of the intermediate boson, B_ij(ω) = λ_ij πωΩ_sf/(Ω²_sf + ω²), with the parameters Ω_sf = 25 meV, λ_11 = λ_22 = 0.5, and λ_12 = λ_21 = −2. The rather large coupling constants are an attempt to model the rather large experimentally observed ratio ∆/T_c. This set gives a reasonable value of T_c ≃ 26.7 K. A similar model was used in Ref. [20] to describe optical properties of ferropnictides; it was also used in [3] and, for consistency, is used here. As stated earlier, we further assume that each Fermi surface features the same gap [21], and that the intraband and interband impurity scattering rates are both equal to 0.8∆_0, where ∆_0 is the low-temperature limiting value of the superconducting gap ∆. As in [3], we have chosen relatively large impurity scattering, which is to be expected considering the early state of pnictide sample preparation and the limited availability of large single crystals.
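The spin-fluctuation spectral function above is simple enough to evaluate directly. The sketch below uses the parameter values stated in the text (Ω_sf = 25 meV, λ_11 = λ_22 = 0.5, λ_12 = λ_21 = −2); the grid and the peak check are illustrative:

```python
import numpy as np

OMEGA_SF = 25.0  # spin-fluctuation frequency, meV (value used in the text)
LAM = {"11": 0.5, "22": 0.5, "12": -2.0, "21": -2.0}  # coupling constants

def b_ij(omega, lam, omega_sf=OMEGA_SF):
    """Spectral function B_ij(w) = lam * pi * w * W_sf / (W_sf^2 + w^2)."""
    return lam * np.pi * omega * omega_sf / (omega_sf ** 2 + omega ** 2)

w = np.linspace(0.0, 200.0, 2001)  # frequency grid, meV
b11 = b_ij(w, LAM["11"])           # intraband: positive
b12 = b_ij(w, LAM["12"])           # interband: negative (drives the s+- state)
```

The function peaks at ω = Ω_sf with value λπ/2 regardless of Ω_sf, so the strong negative interband channel dominates the pairing, as the s± scenario requires.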
We begin with the density of states, shown in Figure 1. Several effects are apparent. First, for all three σ values the substantial peak usually present at ω = ∆_0 (about 6 meV here) is strongly truncated, with much spectral weight transferred below the gap. However, the detailed sub-gap behavior depends radically upon the scattering strength σ. The near-Born case σ = 0.1 still retains a small minigap of approximately 1.5 meV, which will lead to exponentially activated behavior below about 4 K. Although some data have shown evidence for such exponentially activated behavior, there are also significant data showing power-law behavior. The intermediate case σ = 0.4 shows a monotonically increasing density of states and essentially no minigap, leading to power-law behavior, as proposed in [3]. Finally, the near-unitary case σ = 0.8 also shows a monotonically increasing density of states, but one that is nearly constant at low energy. We will see that such behavior leads to a quadratic temperature dependence of the penetration depth, even without the assumption of the strict unitary limit; Gross et al. noted some time ago [18], in a different context, that T^2 behavior does not require the unitary limit. We note parenthetically that the behavior depicted depends rather strongly upon the large value of impurity scattering assumed: the first two cases will yield more exponentially activated behavior if the scattering is much weaker, while the near-unitary case can potentially [4] lead to a non-monotonic density of states.
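The qualitative effect of pair-breaking scattering on the density of states — truncated coherence peak, spectral weight pushed below the gap — can be mimicked with the phenomenological Dynes formula. This is an illustrative stand-in only, not the self-consistent T-matrix result plotted in Figure 1, and the broadening γ = 0.5 meV is a made-up value:

```python
import numpy as np

def dynes_dos(omega, delta=6.0, gamma=0.5):
    """Normalized DOS N(w)/N(0) = Re[(w + i*G)/sqrt((w + i*G)^2 - D^2)].
    delta ~ 6 meV follows the gap scale quoted in the text; gamma is an
    assumed pair-breaking width standing in for the interband scattering."""
    z = omega + 1j * gamma
    return np.real(z / np.sqrt(z * z - delta * delta))

w = np.linspace(0.0, 15.0, 301)  # energy grid, meV
n = dynes_dos(w)                 # finite sub-gap DOS, broadened peak near delta
```

As γ grows the sub-gap DOS at ω = 0 rises toward a constant, which is the regime that produces T² penetration-depth and Korringa-like relaxation behavior discussed below.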
Figure 2 shows the inverse squared London penetration depth 1/λ²(T), the so-called superfluid density, for the several cases indicated in the figure. In all cases the temperature dependence of 1/λ²(T) differs from the standard two-fluid (Gorter-Casimir) 1 − (T/T_c)^4 law and is similar to the BCS result. Due to the sign change between the gaps, the interband component of the scattering matrix is strongly pair-breaking, analogously to magnetic scattering in s-wave superconductors. As a result, the superfluid density shows near-exponential behavior at low temperature in the near-Born case (σ = 0.1), while the other two cases (σ = 0.4 and 0.8) exhibit power-law behavior at low T, with the actual power varying between 2 and 3.
A more detailed view of the low-temperature power-law behavior of λ(T) is presented in Figure 3, which shows ∆λ(T)/λ(T = 0) for the same three cases. We see that the near-Born case (σ = 0.1) approaches a T^4 behavior, reminiscent of a two-fluid model, while the near-unitary case shows a fairly robust T^2 behavior and the intermediate case falls between these two limits, as one would naively expect. The experimental data available so far [23,24,25] are consistent with T^2, T^4, or exponential (gapped) behavior. Within our model, each of these behaviors can be reproduced by a proper choice of the impurity scattering rate. It is interesting to note that the T^2 dependence we obtain corresponds to a strongly gapless regime. Similar results were obtained recently in Ref. [19], but in the Born limit only.
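The exponent in ∆λ ∝ T^n is conventionally extracted from a log-log fit over a low-temperature window. A sketch on synthetic data — the window, prefactor, and 1% noise level are assumptions, with n = 2 chosen to mimic the near-unitary case:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.02, 0.30, 40)  # assumed reduced-temperature window T/Tc
n_true = 2.0                     # e.g. the robust T^2 regime of the near-unitary case
dlam = 5.0 * t ** n_true * (1.0 + 0.01 * rng.standard_normal(t.size))

# Power law Delta_lambda = a * T^n is linear in log-log coordinates:
slope, intercept = np.polyfit(np.log(t), np.log(dlam), 1)
```

With clean enough data the fitted slope recovers the exponent to within a few percent; in practice the result depends on the fitting window, which is why the text distinguishes T^2, T^4, and exponential regimes rather than quoting a single universal power.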
Figure 4 shows the calculated real part of the microwave conductivity for the three cases above. The microwave conductivity σ_1(T) (Fig. 4) does not show a coherence peak near T_c; the suppression is connected with strong-coupling effects (see [22]). Below T_c the behavior of σ_1(T) is determined by the filling of the impurity-induced states below ∆. Qualitatively it is similar to the temperature dependence of the NMR relaxation rate (see Fig. 5), but in the latter case the Hebel-Slichter peak is additionally reduced in the s± model by a different coherence factor. Almost all of the non-canonical BCS behavior derives from the interband component of the scattering matrix, which results in nearly constant behavior at low T for the near-unitary case, as might be expected from the form of Equation (4), in which a squared density of states enters. The intermediate case shows power-law behavior as well, with the precise exponent not extracted.
Finally, we turn in Figure 5 to the nuclear spin relaxation rate 1/T_1 for the same three σ scenarios. Note that, following convention, we have plotted (T_1T)^{-1} rather than 1/T_1, and all power-law references here refer to (T_1T)^{-1}. T_1 has been a source of substantial controversy in the pnictides due to the existence of several data sets [26,27,28,29] showing near-T^2 behavior throughout nearly the entire temperature range, although there now exist data [30] deviating from this behavior. Several things are apparent from the plot: first of all, the near-Born case shows power-law behavior (1/T_1T ∼ T^3) throughout nearly the entire temperature range below T_c, although it will ultimately revert to exponentially activated behavior at the lowest temperatures. Substantial impurity scattering in the Born limit can thus mimic much of the behavior commonly ascribed to nodes, as was noted in [17,19].
The intermediate case shows an approximate T^1.5 behavior, as described in [3], which is largely driven by the monotonic density of states presented in Figure 1, where the same parameters are chosen. Korringa behavior results in the near-unitary limit, again as a direct consequence of the corresponding behavior of the density of states in Figure 1, but it does not result in either of the first two cases unless the scattering rate γ is increased significantly beyond 0.8∆_0.
It should now be clear that impurity scattering of various strengths (i.e., various σ), if sufficient impurity concentrations are present, can produce a wide variety of power-law behaviors in many thermodynamic quantities, even in the near-Born limit. In the s± state, interband impurities are clearly much more effective in creating such behavior. This has implications for the ongoing lively debate about pairing symmetry, with a significant number of proposals for nodal superconductivity in the pnictides and some experimental evidence for such behavior.
In conclusion, we have calculated the microwave response and the NMR relaxation rate for a superconductor in the s± symmetry state by solving the Eliashberg equations with a model spectrum and taking into account impurity scattering beyond the Born limit. We show that the T^2 temperature dependence of the penetration depth and of the NMR relaxation rate at low temperatures can be reproduced in this model. We have also demonstrated the dramatic effect of the impurity scattering on the real part of the microwave conductivity, which in particular results in nearly constant behavior at low T in the near-unitary case.
Figure 1.
Figure 1. (color online) The quasiparticle density of states for the three indicated cases. The near-Born case σ = 0.1 retains a small gap, while the intermediate case shows a monotonic DOS and the near-unitary case is gapless.
Figures 2 and 3.
Figure 2. (color online) The inverse squared penetration depth. The near-Born limit approaches the BCS "two-fluid" calculation (∝ 1 − T^4) at low temperatures, mimicking exponential behavior, while the other two cases show power-law behavior, as in Fig. 3.
Figures 4 and 5.
Figure 4. (color online) The real part of the microwave conductivity. Note the substantial increase with scattering strength at low temperature.
"year": 2009,
"sha1": "878205e2847c160ae779cd485a743e208d42fbb9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/11/7/075012",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "878205e2847c160ae779cd485a743e208d42fbb9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Viral genome integration of canine papillomavirus 16
Papillomaviruses infect humans and animals, most often causing benign proliferations on skin or mucosal surfaces. Rarely, these infections persist and progress to cancer. In humans, this transformation most often occurs with high-risk papillomaviruses, where viral integration is a critical event in carcinogenesis. The first aim of this study was to sequence the viral genome of canine papillomavirus (CPV) 16 from a pigmented viral plaque that progressed to metastatic squamous cell carcinoma in a dog. The second aim was to characterize multiple viral genomic deletions and translocations as well as host integration sites. The full viral genome was identified using a combination of PCR and high throughput sequencing. CPV16 is most closely related to chipapillomaviruses CPV4, CPV9, and CPV12 and we propose CPV16 be classified as a chipapillomavirus. Assembly of the full viral genome enabled identification of deletion of portions of the E1 and E2/E4 genes and two viral translocations within the squamous cell carcinoma. Genome walking was performed which identified four sites of viral integration into the host genome. This is the first description of integration of a canine papillomavirus into the host genome, raising the possibility that CPV16 may be a potential canine high-risk papillomavirus type.
Introduction
Papillomaviruses are circular double-stranded DNA viruses approximately 8 kb in length that are present in humans and many animal species [1,2]. These viruses are host and site specific [1,2]. They infect keratinocytes at either mucosal or cutaneous sites and most often cause benign proliferations, such as papillomas or plaques [1,2]. Typically, these lesions regress, although rarely they can persist and progress to cancer [2][3][4]. Human mucosal papillomaviruses are known to cause essentially all cases of cervical cancer [4]. They are divided into the low-risk and high-risk types, with the high-risk types associated with a higher risk of cancer development [2]. With human high-risk mucosal papillomaviruses, such as human papillomavirus (HPV) 16 and 18, viral integration into the host genome is a critical event in carcinogenesis, although the underlying mechanism is not entirely clear [3,5]. Unlike human mucosal high-risk papillomaviruses with a well-established causal association with cancer, human cutaneous papillomaviruses are only rarely associated with cancer. When this does occur, it is most often in patients with specific immunodeficiencies and many times in association with ultraviolet light exposure [6]. Integration of the viral genome has not been demonstrated for these cutaneous HPVs.
The aims of this present study were to sequence the viral genome of canine papillomavirus (CPV) 16 and to characterize viral genomic translocations, deletions, and integration sites into the host genome. The clinical features of this case have been described previously and were presented alongside its biological male offspring [24]. Both dogs developed metastatic squamous cell carcinoma in association with a canine papillomavirus. The male offspring developed cancer in association with CPV12 and the dog in the present case with CPV16 [24]. The CPV16 genome has also been described previously in a genome announcement based upon this case [13].
Routine surgical biopsies were obtained from a 13-year-old Basenji dog that developed multiple cutaneous pigmented viral plaques, characterized histologically by epithelial hyperplasia and hyperkeratosis (Supplemental Figure 1A) (clinical and pathological data presented in Luff et al., 2016) [24]. Within one plaque there was progression to invasive squamous cell carcinoma (SCC) (Supplemental Figure 1B). Additional surgical biopsies were obtained one year later at the site of the previously diagnosed squamous cell carcinoma, as well as from an enlarged draining lymph node. Biopsies confirmed regrowth of the squamous cell carcinoma as well as metastasis to the draining lymph node (Supplemental Figure 1C). A fresh sample of the squamous cell carcinoma was frozen and stored at −80°C. Histologic evaluation of the samples was performed by two board-certified veterinary pathologists (PR and JL).
Initial papillomavirus PCR
Genomic DNA was extracted from two 25 µm scrolls cut from the formalin-fixed paraffin-embedded (FFPE) tissue samples using a commercially available kit following the manufacturer's recommended protocol (DNeasy tissue kit, Qiagen Inc., Valencia, CA). PCR was performed using a degenerate consensus primer set, CanPV/FAP64, that has previously been used to identify multiple canine papillomavirus types [28]. PCR was performed using 100 ng of extracted DNA as previously described [28]. Additional sets of degenerate consensus primers (CPV sets 1-5, Supplemental Table 1) were designed using Primer3 design software (http://primer3.sourceforge.net). These primer sets were designed based upon conserved regions of canine papillomavirus genomes identified using nucleotide alignments of the following CPVs, with their corresponding GenBank accession numbers in parentheses: CPV3 (NC_008297), CPV4 (NC_010226), CPV5 (FJ492743), CPV9 (NC_016074), CPV10 (NC_016075), CPV11 (JF800658), CPV12 (JQ754321). These additional primers aimed to amplify sequences from the remaining papillomavirus genome. PCR reaction conditions were as follows. Samples of genomic DNA were diluted with distilled water to yield a concentration of 10 ng/µl. The DNA sample (100 ng) was then amplified by PCR in a reaction mixture containing 10 mM Tris-HCl, 1.5 mM MgCl2, 200 µM each (800 µM total) deoxynucleoside triphosphate (dNTP), 1 unit (0.2 µl) Taq polymerase (HotStar Taq DNA polymerase, Qiagen, Valencia, CA), and approximately 2 µM of each primer for a total reaction volume of 50 µl. All PCR reactions were performed on an Applied Biosystems GeneAmp PCR system 2700 thermocycler (Foster City, CA). An initial activation step of 95°C for 10 min was followed by 50 cycles of 1) 1 min denaturation at 95°C, 2) 1 min annealing at 50°C, and 3) 2 min elongation at 72°C, with a final hold for 7 min at 72°C and then a hold at 4°C.
All PCR products were electrophoresed through a 1% agarose gel and visualized with a DNA stain (GelRed, Phenix Research Products, Candler, NC). PCR products were cut from the gel and purified using a commercially available kit (Promega Wizard SV Gel and PCR Clean-Up System, Promega Corp., Madison, WI). Purified PCR products were submitted to a routine sequencing laboratory (Eton Bioscience, San Diego, CA) and the resulting nucleotide sequences were analyzed for sequence similarity to known papillomavirus types using the BLAST tool of the National Center for Biotechnology Information (NCBI).
High throughput sequencing
High throughput sequencing (HTS) using the Illumina HiSeq 2500 platform (UC Davis Genome Center, Davis, CA) was performed on DNA extracted (DNeasy Blood and Tissue Kit, Qiagen, Inc.) from the fresh frozen sample of squamous cell carcinoma. Raw reads were trimmed to remove adaptor contamination and low-quality sequences using Scythe and Sickle (https://github.com/ucdavis-bioinformatics), and were aligned to the Canis familiaris genome using BWA's short read aligner [29]. Read pairs that aligned to the dog genome or to phiX were set aside. The remaining reads were run through the PRICE assembler seeded with two fragments (315 bp and 494 bp) of the viral genome that were amplified and sequenced using the degenerate primers above [30].
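The host-subtraction step above can be sketched in a few lines. This is an illustrative Python fragment, not the actual pipeline (which used BWA for alignment and PRICE for assembly); the read-pair identifiers and the alignment-flag structure are hypothetical:

```python
def candidate_viral_pairs(alignment_flags):
    """Return IDs of read pairs in which neither mate aligned to the
    host (dog) genome or to the phiX control.

    alignment_flags maps a read-pair ID to a (mate1_aligned,
    mate2_aligned) tuple. Pairs with any host/phiX alignment are set
    aside, mirroring the filtering applied before PRICE assembly.
    """
    return {rid for rid, (m1, m2) in alignment_flags.items()
            if not (m1 or m2)}

# Hypothetical example: only pair "r3" has both mates unaligned.
flags = {"r1": (True, True), "r2": (True, False), "r3": (False, False)}
print(sorted(candidate_viral_pairs(flags)))  # ['r3']
```

On the real data, this filter reduced the trimmed read pairs to the small candidate-viral fraction that was used to seed the assembly (see Results).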
Overlapping PCR
Additional overlapping sets of primers (ConSet1-5, Supplemental Table 1) were designed using Primer3 design software and included specific primers designed from the HTS sequence (ConSet1 For and ConSet5 Rev) as well as new degenerate consensus primers designed based upon nucleotide alignments of the following CPVs, with their corresponding GenBank accession numbers in parentheses: CPV3 (NC_008297), CPV4 (NC_010226), CPV5 (FJ492743), CPV9 (NC_016074), CPV10 (NC_016075), CPV11 (JF800658), CPV12 (JQ754321). PCR was performed as described above using 100 ng of DNA extracted from the FFPE pigmented viral plaque samples, with the exception that only 45 cycles were used. One set of primers (ConSet #2) produced a product with a novel sequence 279 base pairs (bp) in length. Two additional overlapping sets of specific primers (ChanaSet 7 and ChanaSet MM, Supplemental Table 1) were then generated using this new sequence fragment and the HTS sequence. PCR was performed as described above with the following changes: MgCl2 was reduced from 1.5 mM to 0.5 mM and only 100 µM each (400 µM total) deoxynucleoside triphosphate (dNTP) was used; 45 cycles were run with an annealing temperature of 57°C.
Additional PCR and nucleotide sequencing was performed using multiple sets of specific primers for CPV16 and DNA extracted from the FFPE pigmented plaques and the fresh SCC sample. These primer sets (listed in Supplemental Table 2) included ChanaSet 7 (above), ChanaSet 9, and ChanaSet 11. PCR was performed as described above for ChanaSet 7 with the exception that only 50 µM each (200 µM total) deoxynucleoside triphosphate (dNTP) was used and only 40 cycles were run at an annealing temperature of 58°C. PCR using the primer set ChanaSet 14 (Supplemental Table 2) was run using genomic DNA extracted from the fresh SCC sample and the viral plaque sample, with reaction conditions as stated above for ChanaSet 9 except that MgCl2 was not added and 45 cycles were run.
All PCR products were electrophoresed through a 1% agarose gel, purified, and sequenced as described above. The resulting nucleotide sequences were analyzed using BLAST. Vector NTI Advance 10 sequence analysis software (Invitrogen, Carlsbad, CA) was used to assemble the sequence contigs containing high-quality trace files.
Genome walking
Genome walking was performed using the Universal GenomeWalker kit (Clontech Laboratories, Mountain View, CA) following the manufacturer's recommended protocols. Initially, the fresh SCC tissue sample was processed using the NucleoSpin Tissue Genomic DNA purification kit according to the manual. The isolated DNA was then digested overnight at 37°C with the restriction enzymes DraI, EcoRV, PvuII, and StuI in separate reactions. The digestion reactions were then purified using the NucleoSpin Gel and PCR Clean-up kit, following the protocol in the GenomeWalker manual. The GenomeWalker adaptors were then ligated to the digested libraries in a reaction mixture containing 4.8 µl digested purified DNA, 1.9 µl GenomeWalker adaptor, 0.8 µl 10X ligation buffer, and 0.5 µl T4 DNA ligase. The mixture was incubated overnight at 16°C followed by 5 min at 70°C to stop the reaction. A PCR reaction was then performed using the adaptor primer and gene-specific primers (Supplemental Table 3). Omission of the DNA template served as a negative control. The library provided in the kit with its associated primer served as a positive control. The PCR reaction contained 19.5 µl water, 2.5 µl 10X Advantage 2 PCR buffer, 0.5 µl dNTP (10 mM each), 0.5 µl each primer (10 µM), 0.5 µl Advantage 2 polymerase mix, and 1 µl of each DNA library. PCR reactions were performed on an Applied Biosystems GeneAmp PCR system 2700 thermocycler. An initial 7 cycles of 1) 94°C for 25 s and 2) 72°C for 3 min were followed by 32 cycles of 1) 94°C for 25 s and 2) 67°C for 3 min, with a final 7 min at 67°C. Secondary PCR was performed using 1 µl of the PCR reaction diluted in 49 µl water with adaptor primer 2 and a gene-specific primer in the combinations listed in Supplemental Table 3. The secondary PCR reaction mixture was as described above for the primary PCR reaction.
Secondary PCR reaction conditions included an initial 5 cycles of 1) 94°C for 25 s 2) 72°C for 3 min followed by 20 cycles of 1) 94°C for 25 s 2) 67°C for 3 min with a final 7 min at 67°C.
All PCR products were electrophoresed through a 1% agarose gel and visualized with a DNA stain. PCR products were cut from the gel and purified using a commercially available kit (Promega Wizard SV Gel and PCR Clean-Up System). Purified PCR products were submitted to a routine sequencing laboratory (Eton BioSciences) or routinely cloned using the Topo-TA cloning kit (ThermoFisher Scientific, Waltham, MA) following the manufacturer's recommended protocols. Resulting clones were purified using a miniprep kit (QIAprep Spin Miniprep Kit, Qiagen, Germantown, MD) following the manufacturer's recommended protocols and submitted for sequencing to a routine laboratory (Eton BioSciences). The resulting nucleotide sequences were analyzed using the BLAST tool of the National Center for Biotechnology Information (NCBI) and the Canis familiaris genome (CanFam 3.1).
Viral mRNA expression
Real time PCR was performed to measure viral copies of CPV and the expression level of viral mRNA using total DNA and total RNA extracted from the FFPE tissue samples of the viral plaque (early lesion), the SCC (late lesion), and the lymph node containing metastatic SCC (late lesion). Total DNA was extracted as described above. Total RNA was extracted using a commercially available kit following the manufacturer's recommended protocol (E.Z.N.A. FFPE RNA kit, Omega, Norcross, GA), and cDNA synthesis was carried out on 1000 ng of purified RNA using a commercially available kit (QuantiTect RT kit, Qiagen, Valencia, CA). The resulting 20 µl of cDNA was further diluted with 180 µl water. A standard curve was generated using a dilution series of purified PCR products for each primer set. Primer sets for CPV16 E7 and the reference gene RPL13A (GenBank accession number NM_001313766) (Supplemental Table 4) were designed using Primer3 design software. Primers for GAPDH, also listed in Supplemental Table 4, have been previously published [31]. Real time PCR was performed in a reaction mixture containing 12.5 µl of SYBR Green mix (QuantiTect SYBR Green PCR kit, Qiagen), 0.5 µl of each primer (diluted to 10 µM), 1.5 µl water, and either 10 µl (10 ng/µl) total DNA, 10 µl diluted cDNA, or 10 µl of diluted purified PCR product (standard curve), for a total reaction volume of 25 µl. Real time PCR was run on the Roche LightCycler 480 (Roche Molecular Systems, Inc., Brighton, MA). An initial activation step of 95°C for 13.5 min was followed by 50 cycles of 1) 10 s denaturation at 95°C and 2) 1 min annealing at 57°C. A melt curve was performed at the conclusion of the reaction. Analysis was carried out using the Roche LightCycler 480 Software (Roche Molecular Systems, Inc.). Primer efficiency was calculated from the standard curves. All samples were run in duplicate and mean values calculated.
The standard curves were used for absolute quantification of the DNA samples to determine copy number. CPV16 copy number was normalized to copies of the reference gene RPL13A. Relative CPV16 E7 mRNA expression was determined by first normalizing expression to the reference gene GAPDH and then calibrating all samples to the mean expression in the plaque sample (2^−ΔΔCq method) [32].
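The relative-expression calculation can be written out explicitly. The sketch below implements the standard 2^−ΔΔCq method cited above [32]; the Cq values are invented for illustration, since the raw Cq data are not reported in the text:

```python
def fold_change_ddcq(cq_target, cq_ref, cq_target_cal, cq_ref_cal):
    """Relative expression by the 2^-ddCq method.

    cq_target / cq_ref: quantification cycles for the gene of interest
    (e.g. CPV16 E7) and the reference gene (e.g. GAPDH) in the test
    sample; *_cal are the same values in the calibrator (plaque) sample.
    """
    d_cq_sample = cq_target - cq_ref       # normalize to reference gene
    d_cq_cal = cq_target_cal - cq_ref_cal  # normalize the calibrator
    return 2 ** -(d_cq_sample - d_cq_cal)  # fold change vs calibrator

# The calibrator compared with itself gives a fold change of 1 ...
print(fold_change_ddcq(24.0, 20.0, 24.0, 20.0))  # 1.0
# ... and a target Cq 2 cycles earlier (same reference) gives 4-fold.
print(fold_change_ddcq(22.0, 20.0, 24.0, 20.0))  # 4.0
```

Because the calculation assumes near-100% primer efficiency, the efficiencies estimated from the standard curves are the practical check on its validity.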
CPV genotyping
Genomic DNA was extracted from two formalin fixed paraffin blocks containing tissue from multiple pigmented viral plaques (labeled Plaques A and Plaques B), one block of lymph node with metastatic squamous cell carcinoma (labeled LN SCC), and one block of squamous cell carcinoma (labeled skinSCC). PCR was performed on the extracted DNA using degenerate consensus primers (CanPV/FAP64) known to amplify multiple canine papillomavirus types (Supplemental Figure 1D) [28]. Sequence analysis of the amplicon from Plaques A revealed an approximately 300 bp sequence that shared approximately 99% nucleotide identity to CPV12. Sequence analysis of the amplicons from Plaques B, LN SCC, and skinSCC revealed a 315 bp sequence that shared less than 80% nucleotide identity to any known papillomavirus type. In an attempt to sequence the entire viral genome, both rolling circle amplification and inverse PCR were performed using genomic DNA extracted from the fresh frozen sample of squamous cell carcinoma, but both failed (data not shown). This suggested the possibility that the fresh sample of SCC did not contain the entire viral genome and that conventional methods to sequence the viral genome would be unsuccessful. We therefore attempted to identify the full viral genome using tissue from the original biopsy of the viral plaques (Plaques B), which was more likely to contain the entire viral genome. These samples, however, were all formalin fixed and paraffin embedded, which limited our ability to amplify large fragments. We therefore generated a series of overlapping short (< 500 bp) amplicons from the FFPE pigmented viral plaque sample. One of the consensus primer sets (CPV Set #2) generated a novel amplicon, 494 bp in length, which shared less than 80% nucleotide identity to any known papillomavirus type.
Whole genome sequencing
High throughput sequencing (HTS) was then performed using DNA extracted from the fresh frozen sample of squamous cell carcinoma. A total of 227 million 100-bp paired-end reads was generated. After trimming, 208 million paired-end reads were aligned to the Canis familiaris and phiX genomes. The 1.6 million read pairs that did not align to these genomes were considered candidate viral sequences and were used in the assembly process.
The two initial nucleotide fragments generated with the CanPV/FAP64 and CPV Set #2 primer pairs were used to seed the PRICE assembler, and a 6454 bp fragment of the viral genome was sequenced. To obtain the remaining sequence, we performed PCR using a combination of specific primers and consensus primers on the sample of viral plaques (Plaques B). After this round of PCR, a single fragment was identified using the ConSet #2 primers. Another round of PCR was then performed with specific primers designed within the new ConSet #2 fragment and specific primers designed within the larger fragment from HTS. This final round of PCR generated the final sequences that enabled assembly of the full viral genome. The sequencing methodology is outlined in Fig. 1.
Viral gene translocations and deletions
High throughput sequencing identified only a portion of the viral genome, extending from 3295 to 1784 of CPV16, with no viral sequences identified between 1785 and 3294 (Fig. 4). There was a very uneven depth of coverage and no reads to indicate a circular genome (Fig. 4). Additionally, there was an 83 bp translocation from CPV16 LCR (7479-7562) to the terminal end of the HTS assembly located in the CPV16 E2/E4 genes (3295). At the opposite terminal end of the HTS assembly located in the CPV16 E1 gene (1278), there was an 85 bp translocation of the CPV16 E1 gene (1278-1363). Thus, the HTS assembly included the following nucleotide arrangement based upon the full CPV16 genome: 7479-7562:3295-(7796/1)-1784:1278-1363 (Fig. 4).
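As a consistency check on these coordinates, the segment lengths can be summed with a short script. This is illustrative arithmetic using inclusive coordinate counting on the 7796-bp circular genome; differences of a base or two from the figures quoted in the text (e.g. the 83 bp and 85 bp translocations) reflect breakpoint-counting conventions:

```python
GENOME_LEN = 7796  # CPV16 genome length in bp

def seg_len(start, end, circular_wrap=False):
    """Inclusive length of a genomic segment; if circular_wrap, the
    segment runs from `start` through position GENOME_LEN/1 to `end`."""
    if circular_wrap:
        return (GENOME_LEN - start + 1) + end
    return end - start + 1

# Segments of the HTS assembly: the LCR translocation, the main body
# wrapping through the genome origin, and the E1 translocation.
parts = [seg_len(7479, 7562),                      # LCR piece at one end
         seg_len(3295, 1784, circular_wrap=True),  # 3295-(7796/1)-1784
         seg_len(1278, 1363)]                      # E1 piece at other end
total = sum(parts)
print(parts, total)  # [84, 6286, 86] 6456

# The total should approximate the reported 6454-bp HTS assembly.
assert abs(total - 6454) <= 5
```

The near-agreement with the 6454 bp assembly reported above supports the reconstructed 7479-7562:3295-(7796/1)-1784:1278-1363 arrangement.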
Fig. 1.
Schematic representation of the sequencing methodology used to generate the full-length canine papillomavirus (CPV) 16 genomic sequence. Genomic DNA was extracted from a sample of squamous cell carcinoma and two segments of viral genome were identified using degenerate papillomavirus primers followed by amplicon sequencing (fragments indicated in red). Illumina HiSeq was then performed and all read pairs that aligned to dog were removed. The PRICE assembler was then seeded with the two initial viral genome fragments (red), which enabled identification of a 6454 base pair sequence of the viral genome (indicated in blue). The remaining viral genome was identified using overlapping amplicons generated with combinations of specific and degenerate primers on DNA extracted from the original viral plaque (green).

To verify that a portion of the CPV16 viral genome was deleted in the SCC sample, we performed conventional PCR for a region of the E1 gene, a region spanning E1 and E2, and a region spanning the LCR and E6. The sample of viral plaques revealed amplicons for all 3 gene segments, whereas the sample of SCC contained an amplicon only when using primers that amplified the LCR/E6 region. No amplicons were generated with the E1 or E1/E2 primer sets (Fig. 5A).
During sequencing, one primer set (ChanaSet 14) that amplified a portion of the E2/E4 region of CPV16 revealed two different-sized products in the plaque sample (no amplicons were generated in the SCC sample) (Fig. 5B). Sequencing of the larger amplicon identified the expected 899 bp segment spanning 2706-3605 of the CPV16 genome. Sequencing of the smaller amplicon revealed a sequence spanning 2706-3605 but with a 559 bp deletion extending from 2805 to 3364 (Fig. 5C).
Chromosomal integration
We pursued genome walking in order to identify potential sites of integration into the host genome. Using forward primers based in the CPV16 genome, we generated a total of eight unique sequences (Table 1 and Fig. 6) by sequencing different amplicons resulting from genome walking. Two sequences spanned the CPV16 genome from 871 to 1592 and from 871 to 1544, a segment of the E1 gene. Three other sequences identified a translocated segment of the E1 gene located at position 1784 of the CPV16 sequence. Another sequence contained a portion of the E1 gene, a translocated segment of the E1 gene, followed by a sequence of canine chromosome 7; the viral breakpoint was located at position 1784 in the CPV16 genome. One sequence contained a portion of the CPV16 E1 gene and a segment of chromosome 3, with a viral breakpoint at CPV16 position 1457. One final sequence contained a portion of the CPV16 E1 gene and a segment of chromosome 16, with a viral breakpoint at CPV16 position 1449.
Using reverse primers based in the CPV16 genome, we generated 4 unique sequences (Table 1 and Fig. 6) based upon amplicons generated from genome walking. One sequence spanned the CPV16 genome between 5218 and 5673, a segment of the L2/L1 gene. Two other unique sequences included a portion of E2/E4 gene and a translocated segment of the L1/LCR. The final sequence contained a portion of the L2 gene and a segment of chromosome 18.
The sites of integration were located upstream of the gene Piezo Type Mechanosensitive Ion Channel Component 2 (PIEZO2) and within introns of the genes Cytoplasmic Polyadenylation Element Binding Protein 1 (CPEB1), WD Repeat Domain 86 (WDR86), and Vacuolar Protein Sorting 41 (VPS41). CPEB1 is the only one of these genes likely to play a role in cellular proliferation, but given that integration occurred within an intron, it is unlikely that integration resulted in upregulation of this gene.
mRNA expression of viral gene E7
High-risk HPV integration is usually considered a necessary event in the progression of cervical cancer. HPV integration results in increased expression of the E6 and E7 viral oncogenes responsible for cell transformation. In order to determine whether the expression level of E6 and E7 was also increased in the CPV-integrated SCC lesions, quantitative PCR assays were designed specifically for CPV16 E7. The expression level of CPV16 E7 was increased 4-fold in the SCC sample and 13-fold in the metastatic SCC sample when compared to expression within the viral plaque. It was possible that the increased level of mRNA was due to increased viral DNA copy number rather than increased transcriptional activity, so we measured the viral DNA copies in these lesions. Interestingly, CPV16 copy number was higher within the viral plaque sample (average 2446 copies per copy of reference gene) than in either the SCC (average 68 copies per copy of reference gene) or the metastatic SCC sample (average 35 copies per copy of reference gene) (Table 2). Therefore, the level of E7 mRNA produced per viral genome was more than 150-fold higher in the SCC lesions with integrated CPV16 compared to the viral plaque with episomal virus.
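The per-genome comparison in the last sentence follows from simple arithmetic on the reported values. The sketch below redoes it with the rounded figures quoted in the text, so the results are approximate; the more-than-150-fold figure in the text presumably reflects the unrounded measurements:

```python
# Rounded values from the text: relative E7 mRNA (plaque = 1) and
# CPV16 DNA copies per copy of the reference gene RPL13A (Table 2).
samples = {
    "plaque":         {"e7_mrna": 1.0,  "viral_copies": 2446},
    "SCC":            {"e7_mrna": 4.0,  "viral_copies": 68},
    "metastatic SCC": {"e7_mrna": 13.0, "viral_copies": 35},
}

def mrna_per_viral_genome(sample):
    """E7 mRNA output per viral DNA copy, in plaque-relative units."""
    return sample["e7_mrna"] / sample["viral_copies"]

baseline = mrna_per_viral_genome(samples["plaque"])
for name in ("SCC", "metastatic SCC"):
    ratio = mrna_per_viral_genome(samples[name]) / baseline
    print(f"{name}: ~{ratio:.0f}-fold more E7 mRNA per viral genome")
```

On these rounded inputs the SCC works out to roughly 140-fold and the metastatic SCC to roughly 900-fold more E7 mRNA per viral genome than the plaque, consistent with transcriptional derepression rather than increased viral load.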
Discussion
Viral integration and loss of viral episomes occurs routinely for the high-risk mucosal papillomavirus types present within cervical carcinomas [4]. This is not the case for human cutaneous papillomaviruses, for which there are only rare reports of viral integration and associated skin cancers are uncommon [6]. Canine papillomaviruses are similarly only rarely associated with cancer, and viral integration has never been documented for a canine papillomavirus. We have herein described an unconventional method used to sequence canine papillomavirus 16 from a canine viral plaque that progressed to squamous cell carcinoma, which led to identification of viral deletions, translocations, and four sites of viral integration.
Canine papillomavirus 16 has an organization typical of papillomaviruses, including six early genes and two late genes that encode the viral capsid. CPV16 E6 includes the typical two pairs of zinc binding motifs, which are involved in protein-protein interactions [35,40]. E7 from CPV16 contains the LXCXE peptide-binding motif, which is found in high risk HPV16, as well as other CPVs and cutaneous HPVs [41]. This motif is responsible for binding to the tumor suppressor gene retinoblastoma and inhibiting its function [37,38].
Canine papillomavirus 16 is most closely related to the other canine chipapillomaviruses, the closest being CPV4, CPV12, and CPV9. Like all other chipapillomaviruses [22], CPV16 was identified in association with viral plaques present on the skin of a dog. However, unlike most canine viral plaques, which either regress spontaneously or remain as benign lesions, the viral plaque in this dog progressed to squamous cell carcinoma and metastasized to the regional lymph nodes. There are rare reports of viral plaques progressing to squamous cell carcinoma, but even more rarely has the associated virus been identified [24,25]. Given the uncommon occurrence of a viral plaque progressing to cancer and the even rarer identification of the associated virus, it remains unknown whether there are "high-risk" canine chipapillomavirus types that carry a greater risk of cancer. So far, of the chipapillomaviruses, only CPV9, CPV12, and CPV16 have been associated with cancer, and each of these reported occurrences represents only a single dog [24,25]. Canine papillomavirus types from other genera are also rarely associated with cancer. CPV2, a taupapillomavirus, causes cutaneous papillomas that can progress to metastatic squamous cell carcinoma in a research colony of dogs with X-linked severe combined immunodeficiency [23]. CPV17, another taupapillomavirus, was the likely cause of multiple oral SCCs in a single dog, and CPV1, a lambdapapillomavirus, most often causes oral papillomas but has rarely been associated with cancer [26,27,42,43]. Of all these cases of SCC, there have been no reports of viral integration into the host genome.
We demonstrated deletion of portions of the E1 and E2 genes within the SCC samples, along with multiple viral translocations and host integration sites. One viral translocation involved a segment of the E1 gene translocated into a more distant segment of the E1 gene, and in a second translocation a segment of the LCR was translocated into the E2/E4 genes. We also identified 4 sites of viral integration, including 3 within the E1 gene and one within the L2 gene. As three viral breakpoints were identified within the E1 gene, this may be a site of genetic instability within the virus. In one case, integration into chromosome 7 occurred immediately adjacent to a translocated section of E1. The other two integrations within E1 were located slightly earlier in the E1 gene; however, as these are small sequences, it is possible that these sites are located within a translocated section of E1, similar to the chromosome 7 integration site. Regardless of whether it was the main E1 gene or a translocated segment of E1, a portion of the E1 gene integrated into 3 different chromosomes, with at least one integration occurring at the site of a translocation. We also identified a deletion within the viral genome in the pigmented viral plaque, suggesting that even in early lesions without overt criteria of malignancy there can be viral genomic instability.
One limitation of our study was the low and somewhat uneven coverage of the viral genome obtained with HTS. The low coverage was not entirely unexpected, as the samples were from total extracted genomic DNA and most sequences aligned to the dog genome. It is possible that more of the viral genome would have been detected had we been able to obtain deeper coverage. We did, however, confirm loss of at least a portion of the E1 and E2 genes within the SCC using conventional PCR, and we further amplified the identical translocations identified with HTS using a separate method, genome walking. Thus, while HTS may have missed portions of the viral genome, our overall findings of loss of portions of E1 and E2 and multiple translocations have been supported by other molecular methods.
The majority of human cervical cancer cases contain integrated high-risk papillomavirus, and this is widely accepted as an underlying mechanism for carcinogenesis [3,5,44,45]. The best-supported hypothesis is that integration causes disruption of the E2 gene and subsequent loss of E2-mediated transcriptional repression of the E6 and E7 promoter. This loss of repression can thereby result in increased expression of E6 and E7 and ultimately unregulated cell growth [3,5,44-46]. E2 regulation can also be disrupted by methylation, with similar loss of repression of E6 and E7 expression [46]. Alternatively, integration with disruption of E1 expression could result in DNA damage and focal genomic instability [46]. In the present case, disruption of the E2 gene, and possibly E1, with loss of repression of E6 and E7 expression is the most plausible explanation. In support of this hypothesis, our HTS and genome walking results identified sites of integration that would disrupt expression of E1 and E2. Further, E7 mRNA expression was higher within the SCC samples containing integrated forms compared with the viral plaque samples, and this was not due simply to an increase in viral copies, as higher viral copy numbers were identified within the viral plaque.
In some cases of cancer, however, the HPV remains as an episome or integration occurs at a site other than E1/E2, and thus disruption and overexpression of E6 and E7 could not explain carcinogenesis in these cases [45]. Other hypotheses include viral integration within a tumor suppressor gene causing gene inactivation, viral integration flanking an oncogene resulting in overexpression, and viral integration resulting in widespread genomic instability [45,46]. None of these latter hypotheses would explain development of cancer in this present case, as integration did not occur within a tumor suppressor gene nor did it flank any oncogenes. It remains possible, however, that identification of additional integration sites may in fact support some of these alternate hypotheses. One additional hypothesis that we cannot rule out is the formation of viral-host fusion transcripts that are more stable than viral transcripts [46].
Human cutaneous papillomaviruses have been a controversial player in the development of non-melanoma skin cancers in humans, but current data strongly support a role for cutaneous papillomaviruses in the pathogenesis of non-melanoma skin cancers, particularly in immunosuppressed individuals [47-49]. These human papillomaviruses, unlike their mucosal high-risk counterparts, do not integrate into the host genome and instead remain as extrachromosomal DNA [49]. In the African multimammate mouse (Mastomys coucha), a preclinical model used to study the effect of papillomavirus on skin carcinogenesis, natural infection with MnPV induces benign skin lesions that can transform to cancer, but the virus does not integrate into the host genome. In contrast to these other cutaneous papillomaviruses, CPV16 in the present study integrated into the host genome, which likely played a role in carcinogenesis. It may be that CPV16 has a somewhat unusual behavior for a cutaneous papillomavirus, more typical of the high-risk mucosal human papillomavirus types.
It remains to be seen if CPV16 will be an important player in development of cancer in dogs. It is likely that CPV16 was essential for carcinogenesis in this case; however, establishment of a causal association will require further demonstration of oncogene transcription within these tumors and demonstration of transforming abilities of the oncogenes. Nonetheless, this manuscript has demonstrated for the first time the ability of a cutaneous canine papillomavirus to integrate into the host genome, similar to the high-risk human papillomaviruses within cervical cancers.
Conclusions
Canine papillomavirus 16 was identified from a pigmented viral plaque that progressed to metastatic squamous cell carcinoma. Segments of the viral genome were deleted in the squamous cell carcinoma, including segments of the E1 and E2/E4 genes. Multiple translocations of viral genomic segments were detected within the squamous cell carcinoma sample, along with integration into four chromosomes. This is the first report of chromosomal integration for a canine papillomavirus.

Table 2. mRNA expression of CPV16 E7 and viral copy number within tissue samples of viral plaque and squamous cell carcinoma (SCC). CPV16 DNA copy number is expressed as copies of CPV16 DNA per copy of the reference gene RPL13A. Gene expression of CPV16 E7 is reported as a relative quantity in which all samples are normalized to reference gene expression and then compared to the mean expression within the plaque sample (set to 1) to determine the fold change.
The risk of suicide in chronic pain patients
Chronic pain patients are at elevated risk for suicide, but suicidality as an escape from persistent suffering is not the full picture. Chronic pain is comorbid with mental health conditions, in particular depression and personality disorders, which carries with it elevated risk for suicidality. The neural networks involved in chronic pain processing overlap those with depression. Affective disorders such as catastrophizing may not only exacerbate painful symptoms but also increase the risk of suicidal ideation independent of pain intensity and depression. Insomnia, likewise common in the chronic pain population, has also emerged as a risk factor for suicide. The use of certain drugs has also been linked to increased suicidality. Thus, an interplay of factors may enhance suicide risk in the chronic pain population. Migraine is more associated with suicidality than other disease conditions, while neuropathic pain seems to confer no additional risk of suicide. The role of cytokine imbalances and inflammatory processes is being explored as a risk factor for suicide in patients with chronic pain and/or mental health disorders. Substance abuse may also play a role in increasing the risk of suicide in chronic pain patients. The chronic pain population is a particularly vulnerable population with respect to suicidality and clinicians should be mindful of these risks in treating chronic pain patients. *Correspondence to: John Bisney, NEMA Research, Inc., 868 106th Ave North, Naples, FL 34108, USA, Tel: 979864 4479, E-mail: john@leqmedical.com
Introduction
Suicide is a global public health problem and may represent about 1.4% of all deaths and is the 17 th leading cause of death in the world [1]. For every suicide victim, there are as many as 20 others who attempt suicide [1]. Suicide was the 10 th most frequent cause of death in the United States (US) in 2014 (accounting for 42,826 deaths) and appears to be trending upward [2]; the age-adjusted suicide rate in 2014 was 24% higher than it was in 1999 [3]. Of the top 15 causes of death in the US, suicide has the highest male to female ratio (3.6) and the lowest black to white ratio (0.4) [2]. In 2014, about 1.1 million American adults attempted suicide and 9.4 million reported having serious thoughts about committing suicide (without making an attempt) [4].
Suicidality is a complex phenomenon and may occur when the individual simultaneously faces both extreme stress and a limited ability to cope. Indeed, the individual may perceive suicide as the only means of escaping physical and/or emotional pain [5]. Chronic pain patients are at double the risk of suicide compared to others not in chronic pain, with a lifetime prevalence of attempted suicide ranging from 5% to 14% [6]. Chronic pain may create life-altering, overwhelming stress, and limited access to medical care, social support, and/or effective analgesia can compound that burden.
In a survey of 1,512 chronic pain patients, 32% of respondents reported they had some form of suicidal ideation [7]. There are likely multiple reasons for this. Chronic pain is a devastating psychosocial life event that can reduce function, limit the patient's ability to pursue everyday activities, and cause unrelenting suffering. Many chronic pain patients suffer from comorbid psychiatric disorders; this condition is so common that it has been termed "dual diagnosis." In addition, chronic pain patients are often prescribed polypharmacy, which may alter biochemistry in unintended ways and, in so doing, exacerbate distress and suicidal ideation. The aim of this narrative review is to explore what is currently understood about suicidal behaviours in chronic pain patients.
Methods
The PubMed database was searched for the keyword combination "chronic pain + suicide" and the articles obtained were reviewed. In some cases, references from those articles were further explored. Some of our references, in particular for background statistics, come from websites maintained by the Centers for Disease Control and Prevention and the World Health Organization. This is a narrative review rather than a systematic review.
Suicide Terminology
Suicidality must be considered as a continuum. At one end are random ideas and vague thoughts of self-harm and at the other end is completed suicide; between these two extremes are thoughts, plans, and attempts, in that order [8]. Suicidal ideation-entertaining thoughts of suicide-is considered an important precursor to attempted suicide or suicide [9,10]. The most robust predictor of suicide is a prior attempted suicide [11,12]. (Table 1).
Suicide risk may be defined along two dimensions: the baseline risk (an underlying ideation that remains relatively constant over time) and acute risk (risk that fluctuates in response to stressors and situations) [13]. Thus, suicide risk is dynamic over a lifetime.
The risk of suicide in the population of chronic pain patients
Chronic pain has been associated with higher rates of suicidal ideation, suicide attempts, and completed suicides [6]. The prevalence of suicidal ideation in chronic pain patients is about three times as great as among those who do not suffer from chronic pain [14,15]. Explanations for suicidality in chronic pain patients may first include the obvious, namely that individuals with severe, relentless pain and concomitant dysfunction might feel hopeless and consider suicide a way to end their suffering [6]. It has been speculated that certain chronic painful conditions may activate indirect mechanisms such that a mental health disorder is triggered, which then leads to suicidal ideation or suicide attempts [14]. Serotonergic abnormalities have been suggested as playing a role in suicidal ideation and attempted suicide [16].
The risk factors for suicide in general may be grouped into five domains: psychiatric disorders; risks associated with specific personality traits; psychosocial life events, including chronic illness; genetic and familial factors; and neurochemical and biochemical influences [17,18]. Predisposing factors include life events, opportunity, and environmental factors; potentially modifying factors include social support, psychiatric care, and cognitive flexibility [17]. Physical problems, including pain and loss of function, have been specifically observed as a precipitating factor in suicides of geriatric individuals [19].
A number of risk factors for suicidality in chronic pain patients have recently been identified [6]. (Table 2). Pain itself is a risk factor, although some studies have found equivocal results as to whether suicide risk rises with pain intensity in a dose-dependent fashion [20,21]. However, very severe pain confers a much greater risk for suicide than mild pain [20,21]. A retrospective study based on the 1999 Large Health Survey of Veterans (n=260,254) found that completed suicide could be associated with extreme pain intensity levels [22]. After controlling for psychiatric conditions and demographic data, the suicide rates for those with very mild pain compared to those with very severe pain were 45.27 versus 80.65, respectively, per 100,000 patient-years [22]. A meta-analysis of 31 studies found the risk of suicide among those with physical pain was higher than in those without pain [23]. People dealing with physical pain compared to those without pain were significantly more likely to report having a death wish (p=0.0005), current or lifetime suicidal ideation (both p<0.00001), having made plans for suicide (current p=0.0008, lifetime p<0.00001), attempted suicide (current p<0.0001, lifetime p<0.00001), and completed suicide (p=0.02) [23]. A retrospective study from the National Comorbidity Survey Replication (n=5,692) retrieved data on painful conditions, non-painful conditions, and suicidal history [24]. In unadjusted logistic regression analyses, the presence of any pain condition was associated with both 12-month and lifetime suicidal ideation, suicide planning, and attempted suicide. Even when controlling for demographic, medical, and mental health covariates, the presence of a painful condition was significantly associated with lifetime suicidal ideation (odds ratio 1.4, 95% CI, 1.1 to 1.8) [24]. Although not well studied, it may be that the duration of pain increases the risk of suicide [25].
Specific painful conditions may confer a greater or lesser risk of suicidality than others [14,15,20,26,27].
An emerging risk factor for suicide in chronic pain patients is sleep-onset insomnia with daytime dysfunction combined with high pain intensity [20]. Since sleep may be a respite or "escape" for chronic pain patients, insomnia in this population may be perceived as particularly distressing.
It has been suggested that chronic pain as a persistent condition may reduce the patient's fear of death [29]. Thus, chronic pain is a source of distress, may reduce quality of life, can be comorbid with challenging mental health conditions, and, at the same time, serve to reduce the patient's natural fearfulness about death and dying.
Dual diagnosis: Complicating the picture of suicidality among chronic pain patients
Chronic pain is associated with comorbid conditions, such as depression, which may confer their own elevated risk of suicidality [30,31]. Among the general population, it has been estimated that about 90% of individuals who commit suicide have at least one psychiatric disorder at the time of death [32]. The overlap between mental health disorders and chronic pain is so prevalent that patients with both chronic pain and comorbid mental health disorders have a "dual diagnosis." In a study of two geographic regions using data from the Veterans Health Administration (VHA), 432 suicides over a seven-year period were evaluated, of which 381 had detailed chart information available [33]. In the latter group, 68.5% had documented mental health conditions compared to 31.5% who did not. Those with documented psychiatric conditions were also more likely to have reported pain, sleep problems, and suicidal ideation [33].
Chronic pain has been associated with major depressive disorder (MDD), although its association with certain other mental health conditions remains less clear [34,35]. Depressed patients have more complaints about pain and greater impairment from pain than nondepressed patients [35]. Conversely, patients with more severe pain are more likely to have depressive symptoms [36]. When chronic pain is moderate to severe in intensity, reduces the patient's function, or remains refractory to treatment, the patient generally exhibits more severe depressive symptoms and worse outcomes [35]. Even when the depression can be effectively treated, those patients in pain are more likely to relapse than those not dealing with pain [37]. Patients with even residual depressive symptoms are more likely to attempt suicide than those without depression [38]. In epidemiological evaluations of chronic pain patients treated in pain clinics, the prevalence of comorbid depression was 52%, and the mean prevalence of pain in depressed patients was 65% [35]. A primary care study reported that 69% of patients with MDD had at least moderate levels of pain, compared to about 39% who had moderate levels of pain but no MDD [35].
This is your brain in pain and depression
Many of the key areas of the brain affected by MDD and other mental health conditions have also been implicated in pain. For a person in pain, pain relief is perceived as both rewarding and pleasurable and is so encoded in the brain's reward circuits [40]. Structural and functional brain remodelling has been observed in some chronic pain patients. Chronic pain patients may also experience a dysregulation of the brain's reward circuits, which has been theorized as a reason for their reduced pain threshold over time [40]. Thus, it may be important to understand the commonalities in brain functions associated with both pain and MDD.

Table 2. Risk factors for suicidality, including suicidal ideation, in chronic pain patients

Helplessness: The deeply held belief of a chronic pain patient that nothing can be done to control his or her pain and that nothing can change his or her current situation
Hopelessness: The deeply held belief of a chronic pain patient that a positive or desirable result is impossible and will never be achieved
Learned helplessness: A psychological condition in which the chronic pain patient comes to believe that his or her pain is so far beyond control that no attempts are made to change that condition
Desire to escape pain: A natural inclination to not suffer pain or to do things that mitigate the pain
Catastrophizing: An extreme belief that the pain or other related conditions will ultimately lead to an unavoidable disaster of enormous proportions
Escape and avoidance: A commonly reported motivation for suicidal behaviours is the urgent desire to escape an unbearable situation [28]. Chronic pain patients "trapped" by their pain may experience an urgent desire to escape, but since escape is impossible, the burden of chronic pain may seem unbearable
Poor problem-solving skills: Chronic pain patients face numerous, dynamic, and sometimes formidable problems, and the inability to solve them or even address them effectively may contribute to both frustration and a sense of helplessness
Mental health conditions: Numerous mental health conditions, such as major depressive disorder, are prevalent among chronic pain patients and some may confer a risk of suicidality
Pharmacology: Certain drugs may exacerbate suicidality and/or cause depressive symptoms; some drugs are associated with suicidal thoughts and actions; long-term opioid therapy may be associated with hypogonadism, depressed libido, and sexual dysfunction, which may lead to depressive symptoms
Nociception-afferent neural activation that transmits sensory information about noxious stimuli-differs from pain, which is the conscious experience [41,42]. Pain has an emotional and even intellectual context which is influenced by the patient's mindset, attitudes, mental health, cultural background, religious beliefs, and other factors. Advanced neuroimaging technology has provided vast insights into the brain and illuminated the cortical basis of pain perception. Indeed, in 1989, the presence of a pain-specific cortical matrix, originally named the "neuromatrix," was hypothesized [43]. Over time, the neuromatrix concept was refined into the "pain matrix," which defines a dedicated and specific pain-processing network within the brain [44][45][46][47]. It has been suggested that pain is experienced by the flow of information along the entire pain matrix rather than the specific isolated activation of a particular brain region [48]. The brain network associated with perception of acute pain in healthy subjects is distinct (at least partially) from the brain networks involved with chronic pain [49]. The neural network associated with chronic painful conditions engages some areas of the brain that are also associated with cognitive processes and emotional responses, which provides a physiological basis for the clinical observation that chronic pain often appears to have an emotional and contextual component absent in acute pain [49]. This shared neurobiology may explain why cognitive behavioural interventions can be effective in chronic pain patients (Figure 1).
Amygdala
This small area in the interior of the brain processes emotional responses and olfactory perceptions. It has been theorized that the amygdala attaches an emotional context to a pain experience [50]. Scanning studies suggest that MDD patients experience increased blood flow to the amygdala or changes in amygdala morphology [51-54]. Suicidal MDD patients are more likely to have a larger amygdala volume than non-suicidal MDD patients [55]. Amygdala activity increases during stress, which might be precipitated during chronic pain [56]. The amygdala has been described as the crucial link between chronic pain and depression [57].

Figure 1. Key landmarks in the brain related to the pain pathways and mental health. The inset shows the insular cortex within the brain
Anterior cingulate cortex (ACC)
This region supports cognitive decision-making, anticipation of rewards, empathy, and emotional response [58]. The ACC also helps process pain signals and may influence the patient's motor response to pain. Endogenous opioid activity in the ACC can relieve pain [59].
Cerebellum
Depressed patients typically exhibit a hypoactive cerebellum with reduced vermal volume [58,60,61]. The cerebellum may be activated when a patient experiences pain, a noxious stimulus, or in situations where there is empathy for another's pain [62].
Gray matter
The majority of neuronal cell bodies of the brain are found in the gray matter. Patients with chronic pain and MDD have reduced gray matter density, although the patterns of this loss and their potential commonalities have not yet been elucidated [63,64].
Hippocampus
The hippocampus helps to form and store episodic memories [58]. Its volume is reduced in depressed patients [54,65,66]. During painful experiences, the hippocampus may become activated [67]. Hippocampal dysfunction may result in an inappropriate response to pain [68].
Insular cortex
The insular cortex aids in the complex processing of multiple streams of information, such that an emotionally relevant context can be derived for a particular sensory experience [58]. The insular cortex may be activated when the patient perceives pain, and it may help the patient contextualize the pain [62].
Nucleus accumbens
Pain specialists are familiar with the role of the nucleus accumbens in addiction; it is part of the brain's reward and pleasure pathway and may also help to regulate emotional responses [58]. The nucleus accumbens may increase negative responses, including the fear of pain [69].
Prefrontal cortex (PFC)
The prefrontal cortex regulates executive function, which includes the ability to plan, working memory, decision-making abilities, and deferred gratifications [58]. When a patient experiences pain, it activates the PFC which helps to process the affective perception of pain [62].
Somatosensory cortex
The somatosensory cortex is active in tactile sensory memories, and it is one of the main areas of the brain to help identify noxious stimuli, process the sensory perception of pain and pain intensity, and describe and differentiate pain [62,70].
Thalamus
The role of the thalamus-the volume of which is reduced in depressed patients-is to relay sensory information from the body to the cerebral cortex [58]. The thalamus may play a role in both affective and sensory processing of pain signals [62].
Sensory and affective aspects of pain and mental health disorders
In the clinical setting, pain is almost exclusively measured in terms of its intensity; yet the sensory aspects of pain (its location, characteristics, waxing and waning, relieving or exacerbating mechanisms, and so on) often emerge as the more important symptoms in the treatment of chronic pain patients. In an observational clinical trial in Italy of 627 consecutive chronic pain patients admitted to psychosomatic medical counselling, patients were evaluated using the Italian version of the McGill Pain Questionnaire and subjected to the cold pressor test to evaluate their pain threshold; pain tolerance was defined as the time elapsed between immersion of a limb into cold water and the moment the limb was withdrawn [71]. Of the patient population, 381/627 (61%) were diagnosed with some form of mood spectrum disorder based on the Mini-International Neuropsychiatric Interview (MINI) and the Hospital Anxiety Depression Scale (HADS) [72][73]. Patients with mood spectrum disorder demonstrated lower pain thresholds to the cold pressor test and increases in all dimensions of clinical pain versus control patients (those without mood spectrum disorders) although the difference between groups was not significant. Demographic data from the patient population indicate that the mood-spectrum disorder patients had a higher prevalence of current suicidal thoughts, whereas just under 2% of the control patients reported suicidal ideation [71].
While pain intensity is typically measured in clinical situations, pain also carries with it affective aspects, such as the way the patient thinks and feels about the pain. Affective response to pain actually relies on a different pathway in the central nervous system (CNS) than sensory perception, sharing the same neural pathway as depression [74]. Thus, negative affective response may help link pain to depression.
Catastrophizing
Catastrophizing among chronic pain patients may be defined as the belief that the pain and related dysfunction will unavoidably lead to enormous disaster. In some chronic pain patients, this can trigger a downward spiral that begins with catastrophizing, progresses to negative emotions, and culminates in suicidal ideation [6]. For example, a catastrophizing chronic pain patient might be a relatively functional individual with chronic pain who entertains thoughts of losing his job, then losing his house, and finally becoming completely dependent on hostile family members who do not wish to care for him. Catastrophizing may be associated with neuroses and a heightened negative affect [75,76]. Catastrophizing has been associated with increased pain severity, exacerbated muscle or joint tenderness, and pain-related disability. Catastrophizing may amplify pain signals processed by the central nervous system [77][78][79][80][81][82][83][84][85].
Catastrophizing has been associated with increased consumption of analgesics and intensified negative emotions [84]. Furthermore, the tendency to catastrophize has been associated with insomnia, another risk factor for suicidal ideation [86]. In a study of 360 rheumatology patients in South Korea, pain catastrophizing had a significant association with increased suicide risk [87]. In this study, suicidal risk was particularly heightened in patients who perceived that they would become burdens to their families. Similarly, a US study of 303 chronic pain patients found that both perceiving oneself to be a burden and distressed personal relationships were both significant predictors of suicidal ideation [88].
Problem-solving deficits
In very simplistic terms, suicide can sometimes be related to poor problem-solving skills [89]. Since the problems of chronic pain patients can be numerous, formidable, and dynamic, it seems reasonable to suggest that problem-solving deficits in chronic pain patients would contribute to their frustration and a sense of helplessness [90]. Likewise, depressed patients may exhibit impaired problem-solving abilities; poor problem-solving skills in depressed individuals may contribute to poor health, in that they may not always exercise good judgment about their lifestyle choices or medications [91].
The role of insomnia in suicide among chronic pain patients
A commonly reported motivation for suicidal behaviours is the urgent desire to escape an unbearable situation [28]. For some chronic pain patients, their only relief from their persistent torment is sleep, which may explain why insomnia has emerged as a potent risk factor for suicide in chronic pain patients [6]. Frequent sleep-onset insomnia (but not occasional insomnia or middle-of-the-night insomnia) has been linked to a five-fold increased risk of suicide in both the general and chronic pain populations [92]. In studies of depressed patients, poor-quality sleep, insomnia in general, and hypersomnia have been associated with an elevated risk for suicidality [93][94][95]. In the general population, even insufficient amounts of sleep may be linked to suicide; individuals who slept <4 hours per night had a three-fold increased risk of suicide compared to those who slept 6-8 hours per night, but even the 6-8-hour subjects were still 1.5-times as likely to commit suicide compared to those who slept more than 8 hours per night [96]. A 19-year study of 16,989 healthy subjects in France found that men with ≥3 problems sleeping (the need for sleeping pills, not being able to fall asleep, taking too long to fall asleep, sleeping poorly, or waking too early) had a nearly five-fold increased risk of suicide [97]. (This study included both men and women but analysed data for men only since they comprised 74% of the study population.) Sleep disorders are closely associated with depression, anxiety, and other mental health conditions, and it has been estimated that up to 60% to 80% of depressed patients experience insomnia [98,99]. Chronic pain patients likewise have very high rates of insomnia, ranging from 50% to 96% [100][101][102][103]. Insomnia has been recognized as a risk factor for suicide specifically in chronic pain patients; indeed, sleep-onset insomnia may actually be a more robust predictor of suicidal ideation than depression among chronic pain patients [6,20].
In a study of 88 chronic pain patients (66% female), those reporting suicidal ideation were compared to those who did not report such thoughts [104]. Patients were assessed in five domains: sociodemographic status, physical health, psychological well-being (including whether they suffered depression), cognitive abilities (including catastrophizing), and the use of psychotropic and/or illicit drugs. Controlling for all physical health measures (pain intensity, pain duration, disability), the only significant predictor of suicidal ideation was poor sleep quality (odds ratio 1.29, 95% confidence interval [CI], range 1.09 to 1.53) [104].
Suicidality and pharmacological therapy
Chronic pain patients frequently require polypharmacy, not only to control pain, but sometimes also to address comorbid conditions. Among the most frequently prescribed drugs in the chronic pain population are antidepressants (which have an analgesic effect distinct from their anti-depressive actions), anticonvulsants (for neuropathic pain), and opioid pain relievers.
Antidepressants
In a retrospective study of the Department of Veterans Affairs database, 502,179 patients were identified who had diagnosed depression and a prescription for antidepressants in the period from 1999 to 2004 [105]. Of this patient population, 47% had at least one outpatient mental health visit in the prior year, 13% had diagnosed posttraumatic stress disorder, and 12% had diagnosed alcohol use disorder. Crude suicide rates varied by drug, ranging from 88 to 247 per 100,000 person-years. The highest rates occurred with patients who had just started mirtazapine, followed by venlafaxine, paroxetine, citalopram, sertraline, fluoxetine, and bupropion [105].
Antiepileptic drugs
Although the FDA issued an alert in January 2008 warning about the risk of suicide for many antiepileptic drugs, subsequent retrospective database analyses and reports have produced mixed evidence [106][107][108][109][110]. A case-control study from France matched patients with incident suicide attempts to demographically similar controls (506 patients who attempted suicide versus 2,829 controls) but found no statistically significant association between an attempted suicide and the use of antiepileptic medications (odds ratio [OR] 1.5, 95% CI, range 0.9-2.4) [111]. However, not all of these patients suffered from chronic pain, although some did. A meta-analysis (cohort of 5,130,795 patients who received antiepileptic drugs for any reason) found no association between suicide and the use of these drugs [108]. A retrospective database study of 47,918 bipolar patients treated with antiepileptic medications found that these drugs did not increase suicidality compared to bipolar patients who did not take antiepileptic drugs and also compared to those who took lithium [112].
Opioids
Chronic pain patients who misuse or abuse opioids differ from those who take their medication only as prescribed in terms of their attentional and autonomic responses (reduced parasympathetic nervous activity) to opioid cues, that is, things like the sight of the pill, holding the prescription bottle, observing a doctor write a prescription, and so on [113][114]. It is not clear whether cue reactivity differs in chronic pain patients with and without suicidal ideation. However, the possible association among suicidality, opioid cravings, and cue reactivity may help better define the chronic pain patients at increased risk for suicide [115]. In a study of 115 chronic pain patients on long-term opioid therapy, those patients who sometimes tried to self-medicate their suicidal feelings by abusing opioids were more likely to experience enhanced cravings and heightened opioid-associated cue reactivity [116]. Thus, exaggerated opioid cue-reactivity may suggest an attempt to self-medicate suicidal inclinations.
A retrospective data analysis of suicide mortality and opioid dose (n=123,946 patients from a VHA database) found that higher doses of prescribed opioid analgesics could be correlated with increased risk of suicide, after controlling for demographic and clinical characteristics [117] (Table 3). Similar associations between analgesic dose and suicidality did not occur with acetaminophen, suggesting these findings are specific to opioid analgesics rather than other pain relievers. The reasons for this are unclear. It may be that higher doses of opioids suggest more intense pain and that severe pain intensity is the actual risk factor for suicide; in other words, higher opioid doses simply serve as a marker for greater pain levels. Of course, high opioid doses do not necessarily mean that the patient had his pain fully controlled; because of pain severity, individual responses to opioids, opioid-induced hyperalgesia, and tolerance, patients on high-dose opioid therapy may still suffer uncontrolled pain. High-dose opioid therapy may be a marker for patients who lacked access to other pain-control options, such as cognitive or behavioural therapy, physical therapy, or multidisciplinary pain treatments. It has also been speculated that high doses of opioids decrease inhibitions in individuals who already harboured suicidal thoughts. Finally, giving patients in pain high doses of opioids provides them with ready access to a means to kill themselves by overdose, if they are so inclined [117]. Opioid analgesics are among the most frequently used substances to carry out suicide by overdose [118][119]. Thus, high-dose opioid therapy may be a risk factor for suicide in chronic pain patients.
Table 3. In a retrospective VA study, higher doses of prescribed opioids could be correlated with increased suicide mortality [117]

Opioid use may be associated with distressing side effects that cause depressive symptoms. Sexual dysfunction and opioid-induced hormone deficiency are common in patients on long-term opioid therapy [120]. Opioids suppress the gonadotropin-releasing hormone, which causes the body to produce insufficient amounts of sex hormones, in some cases leading to depressed libido and/or opioid-associated hypogonadism [121]. The mechanism behind this action is the suppression of the hypothalamic-pituitary-gonadal and hypothalamic-pituitary-adrenal axes with resulting decreased levels of testosterone, follicle-stimulating hormone, luteinizing hormone, and dehydroepiandrosterone [122]. Testosterone therapy may be prescribed to address symptoms associated with quality of life, such as mood, sexual function, energy level, and libido, although the risks of this therapy may outweigh benefits [122]. Thus, depressive symptoms in opioid patients may in some cases trace back to organic factors, that is, mood disorders secondary to a medical condition.
In this connection, buprenorphine deserves special mention. In a multisite, randomized, double-blind, placebo-controlled trial, suicidal patients without substance abuse were randomly assigned to receive either ultra-low-dose sublingual buprenorphine (initial dose was 0.1 mg once or twice a day; mean final dosage was 0.44 mg/day) or placebo in addition to other ongoing treatments. At two and four weeks, the low-dose buprenorphine patients scored significantly lower on the Beck Suicide Ideation Scale than placebo patients (mean difference -4.3, 95% confidence interval [CI], -8.5 to -0.2 at two weeks and -7.1, 95% CI, -12.0 to -2.3 at four weeks) [123]. The concomitant use of antidepressants did not affect this response to buprenorphine. These patients were not chronic pain patients, so it is unclear how low-dose sublingual buprenorphine would affect the chronic pain patient population with suicidal ideation.
Chronic pain patients may face unique challenges in terms of suicidal ideation as access to opioid pain medications becomes increasingly more difficult. In an online survey of fibromyalgia patients (n=6,420), 27.2% (n=1,462) reported having thoughts of suicide since hydrocodone was rescheduled to a more restrictive (and less accessible) drug category [124]. Although data are incomplete for all chronic pain patients who rely on opioid analgesia, anecdotal reports suggest that many pain patients are deeply concerned that they might soon be denied adequate pain control.
Suicide risk in specific chronic conditions
In a retrospective review of 1,069 cases of suicide in which some decedents specified why they killed themselves, about 48% gave a nonmedical reason (financial distress, relationship problems, and so on), 33% stated a mental health reason, and 19% had a physical complaint [125]. The most commonly cited physical disorders leading to suicide were cancer (33%), chronic pain (30%), cardiovascular disorders (28%), and metabolic disorders (25%). Compared to those who committed suicide for mental health reasons, those who committed suicide on account of physical disorders tended to be older, male, and single or living alone [125]. While chronic pain was mentioned as an individual, specific cause for suicide, cancer, heart disease, and metabolic disorders may also be associated with pain. Of the decedents who gave mental health disorders as their reason for suicide, autopsies and evaluations of medical records revealed that many had concomitant physical problems, including chronic pain (19%), cancer (8%), heart disease (4%), metabolic disorder (31%), and other conditions, although these conditions were not mentioned as contributing factors to suicide. Individuals who had physical or mental health disorders the longest were most vulnerable: 61% and 71% of those with physical or mental health disorders, respectively, had their condition for at least two years before the suicide [125]. Certain painful conditions may have more direct associations with suicide than others. A brief summary appears below.
Arthritis
Arthritis is characterized by chronic musculoskeletal pain. The evidence for elevated suicide risk in arthritis patients is mixed. In a study of 21,744 individuals, those with arthritis were at greater risk for attempted suicide than those without arthritis (odds ratio 1.46) [126]. The prevalence of attempting suicide at least once in a lifetime was significantly higher in people with arthritis than in those without arthritis, for both men (3.9% vs. 2.0%, respectively, p<0.001) and women (5.3% vs. 3.2%, p<0.001) [126]. Arthritis patients who attempted suicide were more likely to be younger, poorer, and less educated (high school dropout), to be substance abusers, to have a history of anxiety and/or depressive disorders, and to report pain intensity levels of moderate to severe [126]. However, in a retrospective study based on the National Death Index and treatment records obtained from the Department of Veterans Affairs Healthcare System for 2005-2006 (n=4,863,086), arthritis was not associated with increased suicide risk, unlike certain other painful conditions [127].
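Odds ratios like those reported above come from a standard 2×2 contingency table of exposure (arthritis) by outcome (attempted suicide). The sketch below, using hypothetical counts rather than the study's actual data, shows the calculation together with a Wald-type 95% confidence interval.

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio for a 2x2 table, with a Wald 95% confidence interval."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                   + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: lifetime suicide attempts among people with vs. without arthritis
or_, (lo, hi) = odds_ratio(exposed_cases=120, exposed_noncases=2880,
                           unexposed_cases=530, unexposed_noncases=18214)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval whose lower bound exceeds 1 corresponds to the "significantly higher" comparisons reported in the study.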
Back pain
In the Veterans study (n=4,863,086) described above, back pain specifically was significantly associated with an elevated risk of suicide (hazard ratio 1.33, 99% CI, 1.22 to 1.45) [127]. Back pain may confer more of a risk on older than younger patients. In a study of 2,310 suicides committed in Finland from 1988 to 2007, investigators compared individuals with diagnosed back pain (n=133) and those with musculoskeletal pain other than back pain (n=357) to those with no history of any back pain or musculoskeletal disorders (n=1,820). In this study, individuals with back pain who committed suicide were 11 years older than reference patients [128].
Cancer
Overall, cancer patients have twice the risk of committing suicide compared to healthy individuals [125]. Suicidal ideation among older adults has been associated with many types of cancer, specifically, cancers of the lung, gastrointestinal tract, breast, genitals, bladder, and lymph nodes [129]. Suicidality among cancer patients may be associated with pain but might also involve poor prognosis, dysfunction, depression, and a sense of hopelessness [129].
Fibromyalgia
A prospective Danish study of patients with confirmed or suspected fibromyalgia (n=1,361) found no increased mortality in this population but did confirm an increased risk of suicide in female (but not male) patients [130]. The standardized mortality rate (SMR) in the overall study population was 1.3 (95% CI, 0.9 to 1.8) but the SMR for suicide was 10.5 (95% CI, 4.5 to 20.7). In the subset of patients with a confirmed diagnosis of fibromyalgia (n=1,132) SMR was 6.5 (95% CI, 1.8 to 6.7) but it was higher in those with suspected but unconfirmed fibromyalgia (n=106) at 19.6 (95% CI, 2.2 to 70.8) [130]. A possible explanation why the SMR is higher in those with unconfirmed fibromyalgia is that an undiagnosed chronic painful condition may be more frustrating and distressing than a definitive diagnosis, which would likely lead to specific treatment options or at least lend the symptoms credibility.
Similar findings occurred in a later study of 8,186 fibromyalgia patients, of whom 81% reported widespread pain at baseline [131]. There was no difference in the SMR of this patient population compared to the general US population (stratified by age and sex), although fibromyalgia patients had both a higher rate of suicide (4.4% compared to 1.4% of the general population, SMR 3.31 [95% CI, 2.15 to 5.11]) and a higher rate of accidental death (7.1% versus 5.0% in the general population, SMR 1.45 [95% CI, 1.02 to 2.06]) [131]. It is possible that some of the accidental deaths may have been unrecognized suicides.
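The standardized mortality ratios (SMRs) cited in these fibromyalgia studies are the ratio of observed deaths to the number expected from age- and sex-stratified reference rates. The sketch below illustrates the arithmetic with made-up strata and rates; none of the numbers are from the cited studies.

```python
def smr(observed_deaths, strata):
    """Standardized mortality ratio: observed deaths divided by the number
    expected from reference-population rates.

    strata: list of (person_years, reference_rate_per_person_year) pairs,
    one per age/sex stratum of the study cohort.
    """
    expected = sum(person_years * rate for person_years, rate in strata)
    return observed_deaths / expected

# Hypothetical cohort with three strata and made-up reference suicide rates
strata = [
    (12000, 0.00010),  # (person-years, reference rate per person-year)
    (8000, 0.00020),
    (3000, 0.00030),
]
print(f"SMR = {smr(11, strata):.2f}")  # 11 observed deaths vs. 3.7 expected
```

An SMR above 1 means more deaths were observed than the reference rates predict, as in the suicide SMRs of 3.31 and 10.5 quoted above.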
Headache and head pain
In a study based on the National Comorbidity Survey-Replication, multivariate models were used to adjust for concurrent psychiatric disorders and chronic medical conditions. Over the 12-month study period, suicidal ideation was most associated with head pain (odds ratio 1.9, 95% CI, 1.2 to 3.0), as was attempted suicide (odds ratio 2.3, 95% CI, 1.2 to 4.4) [132]. In a study of 4,863,086 American veterans, migraine was associated with an elevated risk of suicide (hazard ratio 1.34, 95% CI, 1.02 to 1.77) [127]. Lifetime headache frequency (all types of headache) was significantly greater in geriatric individuals (≥65 years, excluding dementia patients) with a lifetime history of suicide attempts (odds ratio 1.92, 1.17 to 3.15) [133].
Inflammation
Many psychiatric disorders have been associated with inflammatory processes and related aberrant cytokine levels. In a meta-analysis of 18 studies (n=845 patients), levels of interleukin-1β (IL-1β) and interleukin-6 (IL-6) were found to be significantly higher in the serum and post-mortem brain samples of mental health patients with suicidality compared to mental health patients without suicidality (p<0.05) and compared to healthy control subjects (p<0.05) [134]. Cerebrospinal fluid levels of IL-8 were significantly lower in suicidal patients versus healthy controls (p<0.05) [134]. Inflammation has also been associated with suicidality in MDD patients, specifically decreased IL-2 levels [135,136].
Vitamin D plays a role in immune support by helping to promote T-helper-2 (TH-2) phenotypes, may be inversely correlated with inflammation, and has been implicated in suicidality. In a study of 90 individuals who had either attempted suicide or were depressed but had not attempted suicide, versus healthy control patients, lower mean levels of vitamin D were associated with attempted suicide. In fact, 58% of those who had attempted suicide were clinically deficient in vitamin D [137]. Note that this study included, but was not limited to, pain patients.
Neuropathy
Neuropathy can be extremely challenging to treat, but in the Veterans study described earlier (n=4,863,086), it was not associated with an increased risk of suicide [127].
Obesity
Obesity is a recognized risk factor for pain, including chronic pain, but it has not emerged as a clear risk factor for suicidality [138]. In a retrospective study of 4,005,640 patients in 2001-2002 drawn from the VHA database, body mass index (BMI) was inversely associated with risk of suicide. In this patient population, 1.3% were underweight, 24.3% were normal weight, 40.6% were overweight, and 33.8% were obese. Compared to normal-weight subjects, underweight individuals were at a higher risk of suicide (adjusted hazard ratio 1.17, 95% CI, 1.01 to 1.36), but overweight and obese subjects had lower risks of suicide (adjusted hazard ratios 0.78 and 0.63, 95% CIs 0.74 to 0.82 and 0.60 to 0.66, respectively) [139]. On the other hand, in a Canadian study (Canadian Community Health Survey Cycle 1.2 data, n=36,984), obesity was associated with both an increased risk of lifetime psychiatric disorders and lifetime suicidal ideation and attempted suicide [140].
Substance abuse
A 2005-2006 VHA study (n=4,863,086) reported that men and women were both at increased risk for suicide if they were currently suffering from a substance abuse disorder involving alcohol, cocaine, cannabis, opioids, amphetamines, or sedatives [141]. Those who attempted suicide were more likely to abuse drugs and/or alcohol [142]. Prolonged substance abuse can trigger in the individual a cascade of serious financial, legal, domestic, and social problems which can act as powerful stressors [143]. Furthermore, many substances reduce inhibition and enhance impulsivity, and suicide may be viewed in some contexts as an impulsive act [143]. A study of 113 patients who attempted suicide found that the majority (70%) made their most serious attempt at suicide during a period of heavy alcohol intake [144]. Factors that helped to identify these individuals who tried to commit suicide during a drinking binge were male gender, younger age, and greater degree of alcohol dependence [144].
In alcohol-dependent patients (n=366, 74% men) who had exhibited at least one suicidal behavior, physical pain could be associated with a lifetime history of suicidality [145]. In a case-comparison study from Sweden of people ≥ 70 years treated in a hospital for attempted suicide (n=103) compared to randomly selected subjects of similar age, alcohol use disorder was observed in significantly more patients who attempted suicide (26% vs. 4%, odds ratio 10.5, 95% CI, 4.9 to 22.5) [146].
Discussion
Suicidality is a complex phenomenon and can be associated with both chronic pain and mental health disorders. The association between suicide and chronic pain may not be as simple as a patient who simply has too much pain to bear, but rather may involve the multidimensional aspects of the pain experience, neural networks, and the interplay of other conditions such as substance abuse, inflammation, depression, disease processes, age, pharmacological therapy, and so on.
Since chronic pain patients are often treated with opioid analgesics, the recent public health discussions about limiting opioid access may be viewed by this population as particular stressors. In a survey of 6,420 fibromyalgia patients taken in the first 100 days following the rescheduling of hydrocodone from Schedule III to the more restrictive Schedule II by the Drug Enforcement Administration, 27.2% of fibromyalgia patients reported having suicidal thoughts (compared to 4.4% of fibromyalgia patients in general) [124]. Tighter restrictions on pain relievers, particularly with the upshot that patients may no longer have access to effective pain control, may exacerbate frustration, despair, and even suicidality in this vulnerable population.
Some of the findings in this narrative review may be surprising. While headache is generally not considered a life-threatening or even serious medical problem, migraines are more associated with suicidality than many other chronic pain conditions. This supports the notion that migraine patients must be more effectively treated. On the other hand, neuropathic pain, which can be severe and is particularly challenging to treat, does not appear to elevate the patient's risk for suicide. Back pain is associated with an increased risk for suicide, but evidence is mixed for other types of musculoskeletal pain syndromes. Mental health disorders and chronic pain may involve some of the brain's same neural networks, which supports the notion that these conditions have far more overlap than currently recognized.
It may behove clinicians to consider suicidality when dealing with chronic pain patients, particularly those patients with mental health comorbidities. Chronic pain patients may be extremely vulnerable to stressors, such as family stress, financial burdens, or fears for their future. Even chronic pain patients who are functional and effectively managed with analgesics may be vulnerable to stress about the possible loss of access to pain medicine. Furthermore, patients suffering from chronic pain may experience stress and frustration when their complaints are not believed by clinicians, employers, or family members. In a series of interviews with eight chronic pain patients, six reported having the experience of not being believed or dismissed by clinical staff [147]. Interestingly, the two patients in this study whom clinicians found credible had physical disabilities which made their painful symptoms more obvious. In some cases, pain medications were withheld. Patients in this situation can feel hurt, estranged, angry, and unfairly judged, all of which are powerful stressors in a vulnerable population with elevated risk of suicidality. One of these patients said that she was contemplating suicide-not specifically because of her pain but because no one took her seriously [147]. Of course, clinicians must exercise clinical discernment, in that drug seekers and malingerers are also encountered in practice.
Finally, there are limitations inherent to this sort of paper. This is a narrative review intended to address the broad issues of the topic rather than provide the depth of a systematic review or meta-analysis. Suicide overall is likely under-reported, in that some overdose or traumatic deaths ruled accidental may have been suicides. Certainly, suicidal ideation and attempted suicide are under-reported, because many people would likely not admit these things. Even when a person commits suicide and leaves a note stating reasons, those reasons may be poorly articulated, incomplete, or somewhat inaccurate. The risk of suicide in the chronic pain population is real and important, and there remains much more to learn about this subject.
Conclusion
Chronic pain patients have at least twice the risk of suicide than non-chronic pain patients, but these risks may be far more complex and multifaceted than simply unendurable pain. Dual diagnosis or the concomitant presence of mental health disorders and chronic pain likely exacerbates the risk of suicide. Evidence suggests depression and chronic (but not acute) pain may share some of the same neural networks. Certain conditions (such as migraine) may put a chronic pain patient at heightened risk for suicide although migraine is not itself a fatal condition. While severe pain increases the risk of suicide more than mild pain, there is no clear dose-dependent relationship between incremental steps in pain intensity and suicidality. Chronic pain patients are a vulnerable population, and it may be clinically important to consider their risks of suicidality during treatment.
The REACT-Based Contextual Learning Model And Mastering The Concept Of Physics In High School Students
This study aims to determine the effect of the REACT-based contextual learning model on the mastery of physics concepts in high school students. This study used a quasi-experimental research model with a pretest-posttest control group design. The study population was all students of class XI MIA SMA Negeri Samadua, South Aceh Regency, which consisted of two classes. Both classes were used as research samples, forming an experimental group and a control group. The experimental group is a class that uses a REACT-based contextual learning model in physics learning, while the control group is a class that uses a conventional learning model through the lecture method. The instruments used were RPP, LKPD and test questions. The pretest and posttest in this study were used to measure the students' initial and final abilities in physics. The data analysis technique used is the t-test with a significance level of 5%. The results of hypothesis testing obtained tcount (2.13) > ttable (1.68), meaning that Ho is rejected and Ha is accepted. These results indicate that there was a significant influence of the REACT-based contextual learning model on the mastery of physics concepts for students of class XI MIA SMA Samadua, South Aceh Regency.
INTRODUCTION
The implementation of learning in the classroom is one of the main tasks of the teacher. In the learning process there is still a tendency to minimize student involvement. The dominance of the teacher in the learning process causes students to be more passive, so that they wait for the teacher's presentation rather than seeking and finding the knowledge they need themselves (Bustami, B., et al., 2020). Yet the 2013 curriculum views that knowledge cannot simply be transferred from teachers to students. Students are subjects who have the ability to actively seek, process, construct, and use knowledge (Merisa, N. S., et al., 2020). The interaction in the learning process between students, educators and learning resources in a learning environment is regulated and supervised so that learning activities are directed in accordance with educational goals, which serve to assist students in developing all their potential, skills, and personal characteristics in a positive direction. To make this happen, every teacher is expected to be able to plan, implement and evaluate the learning process and student learning outcomes using good methods (Ibrahim & Yusuf, 2019).
Based on the researchers' preliminary observations at SMA Negeri Samadua, South Aceh Regency, the learning resources used by the teacher were textbooks (student books) from publishers and material summaries in the form of PowerPoint slides, which according to some students made the physics material difficult to understand. Experimental activities were also rarely carried out. According to Ahman & Mursalin (2018), the physics learning process in schools still relies heavily on lecture methods, and physics subject matter seems only to be memorized. Students tend to be passive in the learning process and are unable to construct the knowledge they acquire. A wise teacher will choose, sort and determine a method or model that is suitable for the learning materials to create conducive classroom situations and conditions (Al-Tabany, 2014).
Physics learning is not only limited to learning facts and theories, but requires investigative activities to find new facts, either through observation or experiment, involving process skills based on scientific attitudes (Selamet et al., 2013). Physics learning also has objectives including developing students' knowledge, understanding, and analytical skills towards the environment and surroundings (Cahyono et al., 2017). In learning physics, students are expected not only to master the concepts but also to apply the concepts they have understood in solving physics problems in everyday life (Azizah et al., 2015). The difficulty students have in understanding or mastering the physics material presented by the teacher at SMA Samadua, South Aceh Regency can be seen from the minimum completeness criteria (KKM) determined by the school, which is 70. This is in accordance with the results of research by Lefrida (2016), which concluded that students have particular difficulty understanding concepts in physics subjects, causing many students to have low learning outcomes and not reach the minimum completeness criteria.
Problem of Research
Based on this problem, the researchers suspect that the learning process used, and the method or learning model chosen, have not been effective for students' mastery of physics concepts and their ability to relate them to everyday life, so that an effort is needed to apply a learning model that is thought to improve learning outcomes and mastery of concepts. According to Ismaya et al. (2015), the learning model chosen must be able to help teachers instill concepts in students by inviting them to discover the concepts they are learning, work together, apply these concepts in everyday life and transfer them to new conditions. The contextual learning model is thought to help overcome the problem of concept mastery because this learning model involves students in important activities that help them relate subject matter to the real-life context they face (Selamet et al., 2013). Furthermore, according to Nisa et al. (2018), contextual learning offers learning that further emphasizes students' abilities and relates material to everyday life. Contextual learning models can be developed based on REACT with the aim of improving understanding of concepts and learning outcomes (Aqib, 2015).
The use of the REACT-based learning model in learning must go through five stages, namely relating, experiencing, applying, cooperating, and transferring (Choiriyah, 2017; Nisa et al., 2018). The REACT learning model has high effectiveness in developing students' conceptual understanding. Through this learning model, students are also given the opportunity to develop and practice science process skills optimally (Fakhruriza & Kartika, 2015).
The REACT-based contextual learning model not only teaches concepts and facts but directs students to find meaning in learning through activities relating to the concept of subject matter with everyday life (Musdalifah, 2013). This statement is supported by the results of research conducted (Putra et al., 2014) showing that there are significant differences in mathematics learning outcomes between students who are taught with the REACT strategy and students who are taught using conventional learning models.
Research Focus
Based on the description of the research problems that have been discussed, this study only focuses on learning physics on the subject of global warming in class XI MIA. This study aims to determine the effect of the REACT-based contextual learning model on the mastery of physics concepts in high school students.
General Background of Research
This study used a Quasi Experimental Design research model. Quasi-experimental research functions to determine the effect of the experiment / treatment on the characteristics of the subject desired by the researcher (Mulyatiningsih & Nuryanto, 2014).
The research was conducted at SMA Negeri 1 Samadua, South Aceh Regency, in the second semester of the 2018/2019 academic year, from April 5 to 23, 2019. The study population was all students of class XI MIA SMA Negeri Samadua, which consisted of two classes: 20 students in class XI MIA-1 and 20 students in class XI MIA-2. Because the total population consists of only two classes, these two classes also serve as the research sample, in which class XI MIA-1 is designated as the experimental class and class XI MIA-2 as the control class.
Subject of Research
The experimental class is a class that uses a REACT-based contextual learning model in physics learning, and the control class is a class that uses a conventional learning model through the lecture method, each with 20 students. The pretest and posttest in this study were used to measure the students' initial and final physics abilities.
Instrument and Procedures
The instruments used in this study were treatment instruments and measurement instruments. The treatment instruments in this study include the implementation of learning design (RPP) and student discussion sheets (LDPD). Posttest questions are an instrument used to measure students' mastery of physics concepts in the form of 10 multiple choice questions and 5 essay questions. This test is carried out after students take part in the learning process for the subject of global warming.
Data Analysis
Before testing the hypothesis, prerequisite tests were carried out, including the normality test and the homogeneity test. Furthermore, the t-test was used with a significance level of 5%. This test aims to determine the significance of the students' mastery of physics concepts in the experimental class compared to the control class as measured by the posttest score data. The test criterion used in this study is to reject H0 (and accept Ha) if tcount > ttable. In this case, it means that the average concept mastery score of the experimental class students is better than that of the control class students (Sugiyono, 2019).
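The decision rule above (reject H0 when tcount > ttable at the 5% level) can be sketched with a pooled-variance independent-samples t-test. The posttest scores below are hypothetical, not the study's data; the homogeneity check mirrors the paper's Fcount < Ftable comparison.

```python
import math
import statistics

# Hypothetical posttest scores for an experimental and a control class (n = 20 each)
experimental = [82, 78, 90, 85, 74, 88, 80, 79, 86, 91,
                77, 84, 83, 75, 89, 81, 87, 76, 92, 80]
control = [70, 72, 78, 65, 74, 69, 80, 71, 66, 75,
           73, 68, 77, 70, 64, 79, 72, 67, 74, 71]

n1, n2 = len(experimental), len(control)
m1, m2 = statistics.mean(experimental), statistics.mean(control)
v1, v2 = statistics.variance(experimental), statistics.variance(control)

# Homogeneity of variance check, analogous to the paper's Fcount < Ftable test
F = max(v1, v2) / min(v1, v2)

# Pooled-variance t statistic (valid when the variances are homogeneous)
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t_table = 1.686  # one-sided critical value for alpha = 0.05, df = 38
print(f"F = {F:.2f}, t = {t:.2f}, reject H0: {t > t_table}")
```

With 20 students per class the degrees of freedom are 38, which matches the ttable of roughly 1.68 reported in the abstract.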
RESULTS AND DISCUSSION
From the results of the normality tests for the experimental and control groups at the pretest (6.52 and 5.15) and posttest (7.52 and 1.44), the sample data for each group are normally distributed, so that data analysis can be continued. Likewise, the homogeneity test of the concept mastery scores in the experimental and control classes gives Fcount < Ftable, or 1.06 < 2.15, so the variances are homogeneous and the analysis can be continued. The implementation of learning in the experimental class and the control class was in accordance with the predetermined learning stages, while the student activities during the learning process show that the learning process in the experimental class was better than in the control class.
Mastery of physics concepts in this study can be seen from the pretest and posttest scores of the experimental class and the control class. Table 1 shows the pretest and posttest mean scores of the experimental class and the control class. These results show that the average score of the experimental class increased by 53.35, while the control class increased by only 45. Although the average scores of the two classes both reached the KKM, the experimental class experienced a higher increase than the control class. This shows that the treatment of the learning model affects student learning outcomes. The results of this study are supported by the research results of Fatmala et al. (2016), which state that the implementation of the REACT contextual learning model can improve students' problem-solving abilities, and of Safitri & Mahmudi (2017), which state that contextual learning with the REACT strategy is effective in terms of student achievement.
Based on the results of the study by Jannah & Supardi (2020), the guided inquiry model with the REACT strategy can improve learning outcomes because, with the REACT strategy, students try to find the concepts being taught, try to understand the concepts, work together, and apply the knowledge obtained in real life (Putri & Saputro, 2019); in REACT, students try to find a meaningful relationship between abstract ideas and practical applications in the real world (Fauziah, 2020).
CONCLUSIONS
Based on the results of data analysis and discussion, it can be concluded that there is a significant effect of the REACT-based contextual learning model on the mastery of physics concepts in class XI MIA SMA Samadua, South Aceh Regency. Through the findings of this study, the application of the REACT-based contextual learning model can be used in other physics subject matter in the hope that it can improve students' mastery of physics concepts.
Periodic solution of stochastic process in the distributional sense
In this paper, we aim to study a stochastic process from a macro point of view, and thus the periodic solution of a stochastic process in the distributional sense is introduced. We first give the definition and then establish the existence of periodic solutions on bounded domains. Lastly, for the case where the probability density function exists, we obtain the existence of periodic solutions of the probability density function corresponding to the stochastic process by using techniques from deterministic partial differential equations.
Introduction
Some properties of a stochastic process are worth studying, such as long time behavior, periodicity, ergodicity and so on. There are many classical theories; see the books [3,4,5]. In this paper, we will give a new viewpoint on the periodicity of stochastic processes. We consider the stochastic process from another angle: a macro point of view. We do not consider the motion of a single particle; rather, we are concerned with the motion of the entire system. It is well-known that the probability density function (PDF) of a stochastic process can describe the entire distribution of a system. Hence, in this paper, we consider a property of the entire system: time-periodicity of the PDF. The equation for the density of a stochastic process is known as the Fokker-Planck equation or Kolmogorov equation, which has been studied in [6]. Now, we consider the multidimensional Fokker-Planck equation for the following SDE

dX_t = b(t, X_t) dt + σ(t, X_t) dB_t,    (1.1)

where b is a d-dimensional vector function, σ is a d × m matrix function and B_t is an m-dimensional Brownian motion; see Page 99 in [8]. Let the probability density function of (1.1) be p(t, x), if it exists; then we deduce that p(t, x) satisfies

∂_t p(t, x) = (1/2) div(div(σσ^T p)) − div(b(t, x) p)    (1.2)

with initial data p(0, x) = p_0(x). In this paper we mainly consider the properties of p(t, x). There is the fact that for many stochastic processes the PDF may not exist. Therefore, in this paper, we first give the notion of a periodic solution in the distributional sense for discrete time and continuous time stochastic processes, and then consider a special case in which the PDF exists. Let us recall some developments concerning periodic solutions. Periodic solutions have been a central concept in the theory of deterministic dynamical systems starting from Poincaré's work [23]. For a random periodic dynamical system, the study of pathwise random periodic solutions is of great importance. Zhao-Zheng [30] started to study the problem and gave a definition of pathwise random periodic solutions for C^1-cocycles.
Recently, Feng et al. did some beautiful work on periodic solutions; see [10,11,12]. Noting that the definition of a periodic solution in [10,11,12,30] is different from the one here, we consider the time-periodicity of the entire system in the distributional sense. While preparing our paper, we found the paper of Chen et al. [7], where the existence of periodic solutions of Fokker-Planck equations is considered. They obtained the desired results by discussing the existence of periodic solutions in the distributional sense for some stochastic differential equations (SDEs). More precisely, they used the properties of solutions of SDEs to study the properties of solutions of the Fokker-Planck equation. They obtained the time-periodicity of the PDF in the whole space. We will give another proof from the viewpoint of PDEs. Moreover, the definition of a periodic solution for discrete time and discrete state stochastic processes will be given. The topic of periodic solutions of stochastic processes in the distributional sense on bounded domains is also considered in this paper. For almost periodic solutions, see [21,27].
The rest of this paper is arranged as follows. In Section 2, we present some known results from PDE theory. In Section 3, we first give some definitions of periodic solutions of stochastic processes in the distributional sense, then establish the existence of periodic solutions on bounded domains by using the method of [7]. For the case where the PDF exists, we obtain the existence of periodic solutions of Fokker-Planck equations on bounded domains and in the whole space by using methods from deterministic partial differential equations in Section 4.
Some known results
In this section, we recall some known results about the existence of the PDF of a diffusion Itô process and the existence of periodic solutions of parabolic equations.
A Markov process in R^d with transition probabilities P(s, x, t, B) (B a Borel set in R^d) is called a diffusion process, or a diffusion, if there is a mapping b : R^d × [0, ∞) → R^d, called the drift coefficient, and a mapping (x, t) → A(x, t) with values in the space of symmetric operators on R^d, called the diffusion coefficient or diffusion matrix, such that
(i) for all ε > 0, t ≥ 0 and x ∈ R^d we have lim_{h→0} h^{-1} P(t, x, t + h, V(x, ε)) = 0,
(ii) for some ε > 0 and all t ≥ 0, x ∈ R^d we have lim_{h→0} h^{-1} ∫_{U(x,ε)} (y − x) P(t, x, t + h, dy) = b(x, t),
(iii) for some ε > 0 and all t ≥ 0, x, z ∈ R^d we have lim_{h→0} h^{-1} ∫_{U(x,ε)} ⟨y − x, z⟩² P(t, x, t + h, dy) = 2⟨A(x, t)z, z⟩,
where U(x, ε) = {y : |x − y| < ε} and V(x, ε) = {y : |x − y| > ε}. If A and b do not depend on t, then the diffusion is homogeneous. Bogachev et al. [6] obtained the following proposition.
Proposition 2.1 Suppose that relations (i)-(iii) hold locally uniformly in x and that the functions a_ij, b_i (A = (a_ij), b = (b_j)) are locally bounded. Then the transition probabilities satisfy the parabolic Fokker-Planck-Kolmogorov equation in the sense of generalized functions. If ν is a finite Borel measure on R^d, then the measure μ = μ_t(dx)dt gives a solution to the Cauchy problem with the initial condition μ|_{t=0} = ν.
The above proposition concerns measure-valued solutions of the Fokker-Planck equation. The next result shows that, under some assumptions, a stochastic process possesses a PDF. Let D_T = D × (0, T), where D ⊂ R^d is an open set and T > 0 is a fixed number. Bogachev et al. [6] obtained the following result.
Proposition 2.2 [6, Theorem 6.3.1] Let μ be a locally finite Borel measure on D_T such that a_ij ∈ L^1_loc(D_T, μ) and the corresponding integral inequality holds for all nonnegative φ ∈ C^∞_0(D_T). Then the following assertions are true.
(ii) If, on every compact set in D_T, the mapping A is uniformly bounded, uniformly nondegenerate, and Hölder continuous in x uniformly with respect to t, then μ = ρ dx dt, where ρ ∈ L^r_loc(D_T) for every r ∈ [1, (d + 2)').
The above proposition concerns the existence of the probability density function in the whole space. Now we consider a bounded domain. As noted in [8], in simulations one has to take x in a large but bounded domain D ⊂ R^d, and one may impose an absorbing boundary condition on ∂D, i.e., as soon as a "particle" (a solution path) reaches the boundary, it is removed from the system. These assumptions lead to the following system (2.1). Because of the absorbing boundary condition, a particle never returns once it reaches the boundary; thus, under an absorbing boundary it is impossible to obtain a periodic solution of (2.1). We must therefore consider another case: the reflecting boundary condition [14, Section 5.1.1]. The Fokker-Planck equation of (1.1) can then be written as (2.2). The reflecting boundary condition means that particles (solution paths) cannot leave the bounded domain D, and hence there is zero net flux of p across the boundary ∂D; we thus impose the reflecting boundary condition (2.3). Integrating (2.2) over D and using the boundary condition (2.3) together with the divergence theorem, we obtain conservation of probability. In this case it is possible to obtain a periodic solution of (2.2)-(2.3) with given initial data. In order to establish the desired results, we recall some facts about periodic parabolic equations; see [15]. Consider the periodic-parabolic eigenvalue problem (2.4), where A(t) is a uniformly elliptic second-order differential operator depending T-periodically on t, and Bu = u (Dirichlet boundary condition), or Bu = ∂u/∂ν + b_0(x)u (Neumann or regular oblique derivative boundary condition).
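For concreteness, a standard form of the Fokker-Planck equation with a reflecting (zero-flux) boundary condition, which we take to be the system intended by (2.2)-(2.3), is the following (the flux notation J and outward normal ν follow common usage and are our labels):

```latex
% Hedged sketch of a Fokker-Planck system with reflecting boundary:
\partial_t p
  = \sum_{i,j=1}^{d}\partial_{x_i}\partial_{x_j}\bigl(a_{ij}(t,x)\,p\bigr)
    - \sum_{i=1}^{d}\partial_{x_i}\bigl(b_i(t,x)\,p\bigr)
  =: -\,\nabla\cdot J[p] \quad \text{in } (0,\infty)\times D,
\qquad
J[p]\cdot\nu = 0 \ \text{ on } \partial D .
```

Integrating over D and applying the divergence theorem then gives d/dt ∫_D p dx = −∫_{∂D} J[p]·ν dS = 0, which is the conservation of probability invoked in the text.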
We say that μ ∈ C (C denotes the complex numbers) is an eigenvalue if there is a nontrivial solution u (an eigenfunction) of (2.4). We look in particular for an eigenvalue μ ∈ R possessing a positive eigenfunction (a "principal eigenvalue" μ).
In order to establish the existence of solutions of (2.4), we consider the inhomogeneous linear evolution equation (2.5), where f ∈ C^θ([0, T], X), 0 < θ ≤ 1, and X is a Banach space. Assume the closed linear operators A(t) in X satisfy:
(i) dom(A) := dom(A(t)) is dense in X and independent of t;
(ii) {λ ∈ C : Re λ ≤ 0} ⊂ ρ(A(t)) for all t ∈ [0, T] (ρ(A(t)) denotes the resolvent set of the operator A(t));
(iii) the standard resolvent estimate holds for all λ ∈ C with Re λ ≤ 0 and all t ∈ [0, T].
Set A := A(0) and take the fractional power spaces X^α with respect to A. Assume further
(iv) A(·) : [0, T] → L(X^1, X) is Hölder continuous.
It follows from the results of Sobolevskii [25] that there exists a unique solution u of (2.5). Moreover, there exists an evolution operator U(t, s) such that the solution of (2.5) can be represented in the corresponding form. The function U is strongly continuous on the set Δ := {(t, s) : 0 ≤ s ≤ t ≤ T}: U(·)u_0 ∈ C(Δ, X) for each u_0 ∈ X. Assume the following conditions hold:
(A) A(t) is uniformly elliptic for each t ∈ R and T-periodic in t, of given period T > 0. More precisely, assume the coefficient functions a_jk = a_kj, a_j, a_0 belong to the appropriate Hölder space. We keep B = B(x, D) independent of t ∈ [0, T], so that the operator A(t), the realization of (A(t), B) in L^p(D) (N < p < ∞), has domain independent of t. Then {A(t) : 0 ≤ t ≤ T} satisfies hypotheses (i)-(iv), and by the results of Sobolevskii [25] we obtain the existence of the evolution operator U(t, s) for 0 ≤ s ≤ t ≤ T. We now give the relation between the solutions of (2.4) and of (2.5) with f = 0, together with the following proposition about the positivity of μ.
Proposition 2.4 Assume (A) holds. Assume further that the zero-order term of A(t) satisfies a_0 ≥ 0 on D̄ × R. Then 0 < r < 1.
Definitions of periodic solutions in distributional sense
In this section, we give some definitions of periodic solutions in the distributional sense, covering discrete-time, discrete-state stochastic processes (also called stochastic sequences) and continuous-time, continuous-state stochastic processes. We begin with the discrete-time, discrete-state case. Suppose a stochastic sequence {X_n}_{n≥1}, defined on a complete probability space, has one-step transition probability matrix P. By the Chapman-Kolmogorov equation, the N-step transition probability matrix P^(N) satisfies P^(N) = P · P^(N−1) = ··· = P^N. Now suppose each particle in a particle system has m states, the system has initial distribution (x_1^0, x_2^0, ..., x_m^0)^T, and consider the distribution of the system after the transfer. Therefore, if relation (3.1) below holds, then the particle system returns to its initial distribution. We give the first definition of a periodic solution in the distributional sense.
Definition 3.1 (discrete-time, discrete-state stochastic process) Suppose a particle system has one-step transition probability matrix P and contains m states with initial distribution (x_1^0, x_2^0, ..., x_m^0)^T. If there exists a positive integer N ∈ N such that (3.1) holds, then the particle system is called an N-periodic system in the distributional sense.
One can give examples satisfying (3.1). For instance, suppose a particle system has five states and initial distribution (1/10, 1/10, 7/20, 2/5, 1/20)^T, and take a suitable one-step transition probability matrix. On the other hand, it is easy to see that if P^N = I_m, where I_m denotes the m × m identity matrix (so that P has finite multiplicative order), then equality (3.1) holds for every initial distribution. A stochastic process is called a strong N-periodic system in the distributional sense if (3.2) holds. We remark that the number N in (3.2) equals the least common multiple of the periods of the individual particles. For continuous-time, continuous-state stochastic processes, we borrow the idea of [6,7]: a stochastic process is called a T-periodic system in the distributional sense if μ(t + T, x) = μ(t, x) for all t ≥ 0 and x ∈ R^d, where μ is defined as in Proposition 2.1.
Before closing this section, we establish the existence of periodic solutions in the distributional sense on a bounded domain, generalizing the result of [7]. We remark that the boundary of the bounded domain should be reflecting: if the boundary is absorbing, then we cannot obtain the limit in the following sense, where μ_n and μ are probability measures of a stochastic process on the bounded domain D ⊂ R^d. The probability measures considered here retain full mass, i.e., μ_n(D) = 1 and the limit probability measure satisfies μ_0(D) = 1. The results obtained here are consistent with those of the next section.
Let D be a convex domain in R^d and (Ω, F, P) a complete probability space with an increasing filtration, the coefficient functions b and σ both being defined on R_+ × D̄. Consider the stochastic differential equation with reflection (3.3). In [20], the authors gave the relationship between Φ and X, namely the identity in which ν is the unit outward normal to ∂D at x, and k_t stands for the total variation of k on [0, t].
In order to make the meaning of Φ_t clear, we introduce the following spaces of functions; see [26, Page 164] for more details.
On C(R_+, R^d) and C(R_+, D̄) we consider the compact uniform topology. Given a function ξ in D(R_+, D̄), a function Φ is said to be associated with ξ if the following three conditions are satisfied.
Using the above properties, Tanaka proved the following lemma.
Let w(0), w̄(0) ∈ D̄, and let ξ, ξ̄ be any solutions of the corresponding problems, respectively. Then we have the comparison estimate. Tanaka [26] obtained the following result.
Later, Lions and Sznitman [20] generalized the results of [26]. We now follow the idea of [7] to prove the existence of periodic solutions in the distributional sense on a bounded domain. Since the bounded domain with reflecting boundary is similar to the whole space, the proof is similar to that of [7]; we only indicate the differences. Because no mass is lost in the bounded domain, the probability measure on D̄ always equals 1. Using this fact, we can obtain an analogue on the bounded domain of [24, Theorem, Page 9], and thus Lemmas 2.3 and 2.4 in [7] hold for the bounded domain. Let P(D̄) be the set of Borel probability measures on D̄. We denote the law of X on D̄ by μ : R → P(D̄). Assume there exists a stochastic process L such that the solution Y(t) on R_+ of (3.3) satisfies the corresponding condition. We borrow notation from [7].
Similarly to Section 2 of [7], we define the metric d_BL, the distance associated with bounded Lipschitz functions.
for all μ, ν ∈ P(D̄) and all Lipschitz continuous real-valued functions h on D̄. It is easy to check that (P(D̄), d_BL) is a complete metric space; see [9, Page 390] for details. The main result is the following theorem.
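For concreteness, the standard bounded-Lipschitz (Dudley) metric, which we assume is the d_BL intended here, is

```latex
% Standard bounded-Lipschitz metric on P(\bar D) (assumed form):
d_{BL}(\mu,\nu)
  = \sup\Bigl\{\Bigl|\int_{\bar D} h\,d\mu - \int_{\bar D} h\,d\nu\Bigr|
      \;:\; \|h\|_{\infty}\le 1,\ \operatorname{Lip}(h)\le 1 \Bigr\}.
```

Convergence in d_BL is equivalent to weak convergence of probability measures, which is what the periodicity argument below relies on.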
where Y(t) is a solution of (3.3), A_m is defined as in (3.7), {n_k} is a sequence of integers tending to +∞, and d_BL is the metric above, then there exists an L²-bounded T-periodic solution in the distributional sense of (3.3).
Proof. Inspired by [7], define the stochastic process X_k, where ω ∈ Ω and χ_k is a random variable independent of B_t and Y(0, ω) such that P(χ_k = nT) = 1/(k+1), n = 0, 1, ..., k. Since the functions b and σ are T-periodic in the time variable and Φ_t depends only on X_t, X_k is still a solution of the equation, where B̄_t has the same distribution as B_t. As in [7], using the fact that χ_k is independent of B̄_t, we obtain the corresponding identity, where A_0 ⊂ D̄ is a Borel set. It follows from (3.4), (3.6) and Chebyshev's inequality that the required bound holds. Applying Skorokhod's lemma ([7, Lemmas 2.3 and 2.4]), on some probability space (Ω̃, F̃, P̃) there exists a sequence X̃_k(0, ω̃) (k = 0, 1, ...) with the same distribution as X_k(0, ω) such that some subsequence {X̃_{n_k}(0, ω̃)}_{k=0,1,...} converges in probability to X̃(0, ω̃). Also, we can construct random variables X_k(ω) and X(ω) on the space (Ω, F, P) whose joint distribution is the same as the joint distribution of X̃_k(ω̃) and X̃(ω̃). Since X̃_{n_k}(0, ω̃) has the same distribution as X_{n_k}(0, ω), the corresponding convergence follows. Then, applying Proposition 3.1, using the Itô isometry, and noting that w(t) is a martingale with respect to F̃ (see also the proof of [26, Theorem 4.1], and the reason why "the remainder" disappears in (4.4) on Page 175 there), we obtain the estimate, where we used the independence of the increments of Brownian motion. Since the uniqueness of weak solutions implies the uniqueness of laws, the convergence holds uniformly on [0, T]. Moreover, we can replace X̃(0, ω̃) on (Ω̃, F̃, P̃) by X(0, ω) on (Ω, F, P) with the same law; then the solution X(t) has the same distribution as X̃(t) by weak uniqueness for equation (3.3). It suffices to prove the remaining identity, from which the stated equality follows. Using this equality, together with (3.5), (3.6) and (3.8) (see the proof of [7] on Page 292 for details), we conclude that X(T) has the same distribution as X(0). Define the function z : R_+ → D̄ by z(t) := X(t − n_t T), where n_t = max{n ∈ N : nT ≤ t}. Then z(t) is a T-periodic solution of (3.3).
The proof is complete. We remark that in (3.9), in the second-to-last step we use an equality, differing from [7], where the following inequality was used. Noting this, condition (3.5) is weaker than [7, (5)].
On the other hand, if σ = 0, condition (3.5) reduces to an assumption extending that of Halanay [2].
Similarly, if we define the corresponding quantities, analogous results hold.

Periodic solutions of Fokker-Planck equations

In this section, we obtain some properties of the PDF by considering the existence of periodic solutions of the Fokker-Planck equations. In order to establish the desired results, we divide this section into two parts.
Bounded domain with Dirichlet boundary condition
The reason why we first consider the Dirichlet boundary condition problem lies in the assumptions on (A, B) made in Section 2. In this subsection, we assume that a stochastic process X_t satisfies (4.1), where b is a d-dimensional vector function, σ is a d × m matrix function, and B_t is an m-dimensional Brownian motion. Our aim is to study the properties of the PDF by considering the existence of periodic solutions of the corresponding Fokker-Planck equation. Throughout this subsection, we assume that a particle (in the system) dies when it touches the boundary. That is to say, the PDF of the system satisfies the evolution equation (4.2). In order to obtain the properties of p in (4.2), we need to consider the auxiliary equation (4.3) in D. To establish the existence of solutions of (4.3), we first consider the initial-boundary value problem (4.4), where u_0(x) is a fixed function to be specified later. It is easy to see that there exists an evolution operator U(t, s) such that the solution of (4.4) can be represented in the corresponding form (see Section 13 in [15]). It is easy to check that v(t, x) is a solution of (4.2) with p_0 = u_0. We assume that (C2) the operator A* is uniformly elliptic. Then 0 < r < 1, where r := spr(K).
Proof. For completeness, we give an outline of the proof. To show that r < 1, let u_0 ∈ W^{2,p}_0(D), u_0 ≫ 0, be a principal eigenfunction of K, i.e., Ku_0 = ru_0. Then u := U(·, 0)u_0 solves the corresponding problem. If a_0 ≥ 0 in D̄ × [0, T] and u_0 > 0 in W^{2,p}_0(D), then Propositions 13.1 and 13.3 and Remark 13.2 in [15] apply, and hence v ≫ 0 in W^{2,p}_0(D) for each 0 < t ≤ T. In particular, this implies that r < 1. Hence the solution of (4.2) decays exponentially at any fixed point of D as time goes to infinity, for this special initial data. That is to say, the solution p(t, x) has the decay property, where μ > 0 is given as in (4.3) and p_0(x) satisfies Kp_0 = rp_0 with K = U(T, 0).
Proof. It follows from Lemma 4.1 that the principal eigenvalue of K satisfies 0 < r < 1. Proposition 2.3 then yields an eigenvalue of (4.3) with a positive eigenfunction. Take u_0 to be the principal eigenfunction of K (and take p_0(x) to be the principal eigenfunction of the corresponding K in equation (4.3)). Thus u(t) is the solution of (4.3). The uniqueness of the principal eigenvalue implies the uniqueness of the solution of (4.3). By the T-periodicity of A(t), we have U(t, τ) = U(t + nT, τ + nT), n ∈ Z. Noting the form of the solution of (4.2) and using these properties, we conclude, where we used the fact that U(T, 0)p_0(x) = e^{−μT} p_0(x).
Remark 4.1
In the proof of Lemma 4.1, the initial data u_0 (or p_0) is a special function, namely one satisfying Ku_0 = ru_0. We now give an example to show that this is possible. Consider the problem below and assume that u_0 satisfies (with r > 0) −Δu_0 = ru_0 in D, u_0|_{∂D} = 0. Using the fact that e^{tΔ_D} u_0 = e^{−rt} u_0, where Δ_D denotes the Laplace operator with Dirichlet boundary condition (which follows by Taylor expansion of the semigroup), the solution of the following equation can be written as v(t, x) = e^{μt} u(t, x) = e^{t(μ−r)} u_0.
If we want v(T, x) = v_0(x), we must take μ = r. Because T has no concrete value, we obtain that the solution u satisfies u(t, x) = e^{−μt} u_0 for all t > 0 and x ∈ D.
It follows from Theorem 4.1 that it is difficult to obtain periodic solutions of the linear parabolic equation, so we turn to the nonlinear case. In 1998, Pardoux and Zhang [22] proved a probabilistic formula for the viscosity solution of a system of semilinear PDEs with Neumann boundary condition, where D is an open, connected, bounded subset of R^d. In order to obtain periodic solutions for the Dirichlet problem, we need to consider the nonlinear parabolic equation (4.6) in D.
(4.6)
The assumptions on f will be given later. We first recall some results. In [15], Hess considered the periodic initial-boundary value problem (4.7), in which the function b is assumed independent of t, and used the upper and lower solution method to prove the existence of periodic solutions of (4.7). We first recall the definition of upper (lower) solutions. Let f(·, ·, u) ∈ C^{α/2,1+α}([0, T] × D̄) uniformly with respect to u ∈ [σ, ω], and f(x, 0, 0) = 0 on ∂D. Fix p > max{d, 1 + d/2}. If there exists at least one u_0 ∈ W^2_{p,B} satisfying u(0, x) ≤ u_0(x) ≤ ū(0, x), then problem (4.7) has at least one solution u ∈ C^{1+α/2,2+α}([0, T] × D̄). The proof of Proposition 4.1 is standard and we omit it here. Now, by Proposition 4.1, we only need to find a pair of upper and lower solutions of problem (4.7). To do so, we consider the periodic parabolic eigenvalue problem (4.8) in D.
(4.8)
It is easy to check that if b ≥ 0 on ∂D × (0, T], then the maximum principle holds for the problem (A, B). We will need the following lemmas. We want to know how the principal eigenvalue λ_1 depends on the zero-order term a_0, because in Fokker-Planck equations the drift term is reflected in the zero-order term a_0. To this end, we need the following lemma.
where h ∈ F (see Section 2 for the definition of F). Let λ_1 be the principal eigenvalue of (4.8).
Then we have (i) If λ < λ 1 , then the problem (4.9) has a unique solution u and u > 0 in F 1 ; (ii) If λ ≥ λ 1 , then the problem (4.9) has no positive solution, and no solution at all if λ = λ 1 .
Using Lemma 4.3 above, it is easy to prove the following lemma.
Then the problem (4.6) admits a unique solution.
Proof. The existence of a periodic solution is obtained using Proposition 4.1; we only need to find a pair of upper and lower solutions of (4.6). Indeed, from the assumptions on a_0 and f we see that ū = M ≥ M_0 is an upper solution. Let φ be a positive eigenfunction corresponding to λ_1(a_0), i.e., φ is the solution of (4.8) with λ = λ_1(a_0). Take ε > 0 and set u = εφ; then u is a lower solution of (4.6). Moreover, u and ū are ordered lower and upper solutions of (4.6) if we choose ε ≪ 1 and M ≫ 1. According to Proposition 4.1, problem (4.6) has at least one solution u satisfying εφ ≤ u ≤ M. Uniqueness follows from the comparison principle.
The assumptions on f could be weakened, but that is not our aim here; see [28] for the case f = u(h_1(t, x) − h_2(t, x)u), where h_i, i = 1, 2, are suitable functions.
Bounded domain with reflecting boundary condition
It is easy to see that equation (4.2) admits no periodic solution, so we now consider another case. We assume that a stochastic process X_t satisfies the corresponding SDE. It follows from the results of [8] that a necessary condition for the existence of a probability density function is that the operator A* be uniformly elliptic; throughout this section we assume that A* is uniformly elliptic. We first consider a special case. Note that in most work on periodic parabolic problems the boundary function b is assumed independent of the time t, because under this assumption one can apply the standard theory of evolution equations of parabolic type; see [1,28]. We therefore make this assumption, under which problem (4.11) becomes (4.12) in D. We can use a similar method to deal with problem (4.12).
It is well known that the upper-lower solution method is not suitable for linear parabolic equations: if φ is an upper solution of a linear parabolic equation, then λφ is an upper solution for every λ > 0, so we cannot obtain the existence of a nontrivial non-negative solution this way. In this subsection we consider only the one-dimensional case, because it can be computed explicitly. We want to obtain the existence of periodic solutions of (4.11). For simplicity, denote a(t, x) = (σσ^T)(t, x) and D = (0, 1). Since the operator A* is uniformly elliptic, we have a(t, x) > 0 for (t, x) ∈ [0, T] × [0, 1]. The one-dimensional problem is written below, with boundary condition on ∂D × (0, T] and p(0, x) = p(T, x) in D.
(4.13)
We first assume the conditions below. Then we obtain an identity implying p_t = 0; that is to say, the stochastic process has a stationary probability measure. Summarizing the above discussion, we obtain the stated result. For d ≥ 2 we cannot carry out the computation explicitly, but we conjecture that there exists a positive periodic solution of problem (4.11). Indeed, it follows from (4.11) that (4.16) holds. The existence of a periodic solution of (4.11) is equivalent to obtaining p(0, x) = p(T, x) pointwise for x ∈ D from (4.16).
In 2000, Lieberman [17,18,19] produced a series of works on periodic solutions of parabolic equations on bounded domains. In particular, in [19] Lieberman obtained the existence of periodic solutions of the following parabolic equation. If b and σ satisfy the conditions of [19, Lemma 2.1], then (4.11) admits a periodic solution p.
Whole space
In this subsection, we consider the existence of periodic solutions of Fokker-Planck equations in the whole space. For a stochastic process X_t satisfying equation (4.1), the corresponding Fokker-Planck equation has the form (4.17). Furthermore, if the probability density p(t, x) satisfies p(t + T, x) = p(t, x) for all (t, x), then p(t, x) is called a T-periodic solution of (4.17). Using the method of [13], we obtain the existence of periodic solutions of (4.17). In [13], the author considered the periodicity problem (4.18), whose coefficient is T-periodic in the time variable t, the period T > 0 being arbitrarily chosen and fixed, and obtained the following result.
Comparing problems (4.17) and (4.18), we see that problem (4.17) is linear, and that problem (4.18) contains problem (4.17) when (σσ^T)_ij = constant. First, we remark that for d = 1 the results of the previous subsection also hold for problem (4.17); that is to say, Theorem 4.3 holds for (4.17). In order to obtain the existence of solutions of (4.17) for d ≥ 2, we suppose that a(t) is a continuous, positive, T-periodic function defined on the whole real axis R, and denote [a] = (1/T)∫_0^T a(t) dt. Fix 0 < Q < ∞, 0 < ε < 1, and denote F, G, S as below. If we assume sup_{t,x} |b_i(t, x)| < ∞ and sup_{t,x} |(σσ^T)_ij(t, x)| < ∞, i, j = 1, 2, ..., d, then 0 ≤ F < ∞. Since a(t) is a continuous, positive, T-periodic function on R, we conclude that 0 ≤ G < ∞; from this and 0 < Q < ∞ we get 0 ≤ S < ∞.
The main result of this subsection is the following theorem.
Theorem 4.4 Assume that the SDE (4.1) admits a probability density function p(t, x) satisfying (4.17). Assume further that F < ∞ and that b_i and σ are T-periodic functions. Then there exists a periodic solution p(t, x) ∈ C¹(R_+, C²(R^d)) of (4.17) with p(t + T, x) = p(t, x) for t ≥ 0 and x ∈ R^d.
We choose constants A_i > 0, i = 1, 2, ..., d, satisfying the conditions below; such a choice is possible if 0 < A_i < 1, i = 1, 2, ..., d, is small enough. We set A = (A_1, A_2, ..., A_d). We first prove that the periodicity problem (4.20) is solvable, using fixed-point arguments. Following the idea of [13], we give the following lemma.
then p(t, x) is a solution of problem (4.20). We use the following notation. Proof. Differentiating (4.21) twice in x_1, twice in x_2, ..., twice in x_d, and using the periodicity of a, p, b and σ, we obtain the desired result. See [13, Lemma 2.1] for more details.
In order to find a fixed point of L_1, we define the sets D_1 and D̃_1, on which we define a norm as follows. Then D_1, D̃_1 and C¹([0, T], C²(B̄_1)) are complete normed spaces with respect to this norm; see the Appendix of [13]. We rewrite the operator L_1 in the form below, where M_1(p) = (1 + ε)p. To show that the operator L_1 has a fixed point in the space C¹([0, T], C²(B̄_1)), we need the following lemma.
Lemma 4.6 [29, Corollary 2.4, p.3231] Let X be a nonempty closed convex subset of a Banach space Y. Suppose that T and S map X into Y such that (i) S is continuous and S(X) is contained in a compact subset of Y; (ii) T : X → Y is expansive and onto. Then there exists a point x* ∈ X with Sx* + Tx* = x*.
We recall the definition of expansive operator.
Definition 4.2 [29] Let (X, d) be a metric space and M a subset of X. The mapping T : M → X is said to be expansive if there exists a constant h > 1 such that d(Tx, Ty) ≥ h d(x, y) for all x, y ∈ M. It is easy to check the following lemma; see [13, Lemma 2.3] for details.
Lemma 4.7 The operator M_1 : D_1 → D̃_1 is expansive and onto.
Next we prove that the operator N_1 satisfies condition (i) of Lemma 4.6.
Lemma 4.8 The operator N_1 : D_1 → D_1 is continuous and D_1 is a compact set in D̃_1.
Proof. We first prove that N_1 maps D_1 into D_1. For any p ∈ D_1, using (4.19), we obtain the required bound. For (N_1(p))_t, we get an expression which implies (using (4.19) again) the corresponding estimate. Let k ∈ {1, 2, ..., d} be arbitrarily chosen and fixed. Then, estimating the terms involving (σσ^T)_ij(t + s, ẑ_ij) p(t + s, ẑ_ij) integrated over the variables ẑ_ij and ŷ_k, and using (4.19), we obtain |(N_1(p))_{x_k}| ≤ εQ + (A_1 ··· A_{k−1} A_{k+1} ··· A_d)² A_k Q. Consequently, N_1 : D_1 → D_1. It follows from the above estimates that if p_n → p in the topology of the set D_1, then N_1(p_n) → N_1(p) in the topology of D_1; therefore the operator N_1 : D_1 → D_1 is continuous. It follows from the definitions of D_1 and D̃_1 that D_1 is a compact set in the space D̃_1.
Genetic diversity analysis of Zinc, Iron, Grain Protein content and yield components in Rice
The genetic diversity among 85 landraces was evaluated for ten characteristics using Mahalanobis D² statistics. The genotypes were grouped into six clusters based on the relative magnitude of D² values. Cluster I was the largest, comprising forty-three landraces, while clusters V and VI were monogenotypic. Maximum inter-cluster distance was exhibited between clusters II and VI. Five prominent characters, namely total number of filled grains per panicle, grain zinc concentration, days to fifty per cent flowering, plant height and protein content, contributed most to the total divergence. Cluster V had the highest mean values for panicle length and number of tillers per plant, cluster III had the highest grain yield per plant, and cluster VI had higher grain iron concentration and protein percentage. The maximum contribution towards genetic divergence was observed for number of grains per panicle, followed by grain zinc concentration. Thus, this study evaluated diverse landraces useful in the selection of parents for hybridization.
Introduction
Rice (Oryza sativa L.) is one of the most agronomically and nutritionally important cereal crops. It is a major source of food for more than 2.7 billion people on a daily basis and is planted on about one-tenth of the earth's arable land. It is the single largest source of food energy for half of humanity, most of them in developing countries. Rice is the major food crop in India, occupying nearly 43.97 million hectares with an annual production of 104.32 million tonnes and productivity of 2372 kg ha⁻¹ (India stat, 2012-2013). In the last two decades, new research findings by nutritionists have revealed the importance of micronutrients, vitamins and proteins in maintaining good health, adequate growth and even acceptable levels of cognitive ability, apart from the problem of protein-energy malnutrition.
Quantification of the degree of divergence in a given set of experimental material is vital for the identification of divergent genotypes for further use in hybridization to create new variability. The Mahalanobis D² statistic has been used as a powerful tool for quantifying genetic divergence in a given population. Divergent genotypes can be obtained either by collection from different ecogeographical regions or by induction through combination breeding. The main objective of this research was the evaluation of genetic diversity among landraces of rice for yield and yield-contributing characteristics, in order to identify parents for a hybridization programme.
Materials and Methods
The experimental material consisted of eighty-five landraces of rice obtained from the Plant Breeding Division, Crop Improvement Section, Indian Institute of Rice Research (formerly, Directorate of Rice Research), Rajendranagar, Hyderabad. The experiment was carried out during Kharif 2014-2015 at the Directorate of Rice Research Farm, ICRISAT campus, Patancheru, Hyderabad, India. All the genotypes were sown separately on raised beds in the nursery on 3rd July, 2014. Twenty-five-day-old seedlings of each genotype were transplanted in 2 rows of 3 m length, adopting a spacing of 20 cm x 15 cm, in a Randomized Block Design with two replications. Recommended agronomic and plant protection measures for raising a healthy nursery and main crop were taken up during the experiment. Data were recorded on five randomly selected plants of each genotype in each replication. Ten characteristics, comprising seven quantitative characters and three nutritional traits, were recorded. Grain iron and zinc concentrations were determined by X-ray fluorescence spectrometry. Protein/nitrogen content was estimated by the combustion method using a protein analyzer.
Results and Discussion
The analysis of variance of dispersion for the 85 landraces of rice clearly indicated significant pooled differences among the genotypes for all the characters studied. Hence, further analysis was carried out to estimate D² values.
All landraces of rice under study were distributed into six clusters based on D² values using Tocher's method (Rao, 1952), such that genotypes within a cluster were less divergent than genotypes belonging to different clusters. The distribution of genotypes into the various clusters is presented in Table 1 and Figure 1. Out of the six clusters, cluster I was the largest, comprising forty-three landraces, followed by cluster IV with sixteen, cluster III with fourteen and cluster II with ten, while clusters V and VI were monogenotypic. This indicates the existence of a high degree of heterogeneity among the genotypes.
It is evident from the clustering pattern shown in Table 1 and Figure 1 that landraces originating from similar geographical regions were classified into different clusters, indicating that geographical diversity and genetic diversity were not related, owing to differences in adaptation, selection criteria, selection pressure and environmental conditions.
The average intra- and inter-cluster D² values and statistical distances among the 85 genotypes are presented in Table 2. Intra-cluster D² values ranged from zero (clusters V and VI) to 39.77 (cluster IV). Maximum intra-cluster distance was observed in cluster IV (39.77), followed by cluster III (38.98), cluster II (35.36) and cluster I (32.93), indicating that some genetic divergence still existed among the genotypes within clusters. Selection within such clusters could be based on the maximum mean values for the desirable characters and exploited for yield improvement through recombination breeding.
The inter-cluster D² values of the six clusters were higher than the intra-cluster distances, indicating the presence of wide genetic diversity among the landraces under study. The highest divergence occurred between clusters II and VI (100.30), followed by clusters IV and VI (78.61), clusters I and VI (77.77), clusters III and IV (77.45), clusters II and V (73.74), clusters III and VI (68.76) and clusters IV and V (68.55), suggesting that crosses involving varieties from these clusters would give wide and desirable recombinations and high heterotic recombinants. The lowest divergence was noticed between clusters I and II (43.83), followed by clusters I and III (50.79), clusters III and V (52.33), clusters II and IV (54.34), clusters V and VI (59.93) and clusters I and II (59.10).
The cluster means for each of the ten characters are presented in Table 3. Considerable differences existed among the clusters for all the characters under study. The cluster mean for days to 50 per cent flowering was highest in cluster VI (101.00) and lowest in cluster V (41.00). Plant height was highest in cluster V (140.10 cm) and lowest in cluster II (91.39 cm). Cluster V recorded the highest panicle length (26.50 cm) and cluster II the lowest (19.34 cm). Clusters V and VI recorded the highest number of tillers per plant (12.70), and the lowest was recorded in cluster II (9.46). The number of grains per panicle was highest in cluster V (211.00) and lowest in cluster IV (70.56). The highest 1000-grain weight was recorded in cluster I (23.63 g) and the lowest in cluster VI (19.60 g). Cluster III recorded the highest grain yield per plant (25.58 g), while cluster II recorded the lowest (14.17 g). Grain iron concentration was lowest in cluster II (9.00) and highest in cluster VI (11.95). Grain zinc concentration was highest in cluster VI (25.35) and lowest in cluster II (6.81). Cluster VI recorded the highest grain protein content (9.90), whereas cluster II recorded the lowest (6.81). These results indicate that landraces having high values for a particular trait could be selected and used in hybridization programmes for the improvement of that character.
Cluster V has the highest mean values for panicle length, number of tillers per plant, 1000-grain weight and plant height. Cluster III has the highest grain yield per plant, while grain iron concentration and protein content are highest in cluster VI. Thus, the clusters with high mean values observed in this study may be exploited directly or used as parents in future hybridization programmes.
The results shown in Table 4 indicate that the number of grains per panicle made the highest contribution towards genetic divergence (42.91%), ranking first 1532 times, followed by grain zinc concentration (24.71%, 882 times), days to 50% flowering (13.00%, 464 times), grain protein content (8.99%, 321 times), plant height (7.62%, 272 times), grain iron concentration (2.04%, 73 times), 1000-grain weight (0.53%, 19 times) and grain yield per plant (0.20%, 7 times), while number of tillers per plant and panicle length each ranked first zero times. Similar results were in conformity with Ramya and Kumar (2008) for number of filled grains per panicle, number of productive tillers per plant and grain yield per plant; Banumathy et al. (2010) for grain yield, days to 50 per cent flowering, total grains per panicle and plant height; Padmaja et al. (2010), who reported a major contribution to diversity through total number of grains per panicle; and Sandhya et al. (2014), who found that characters such as number of spikelets per panicle, biological yield per plant, test weight, harvest index and days to 50 per cent flowering contributed maximum towards genetic diversity.
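The reported percentages follow directly from the ranking counts: with 85 genotypes there are 85 × 84 / 2 = 3570 genotype pairs, and each character's contribution is the fraction of pairs in which it ranked first for divergence. A short Python check (character labels abbreviated) reproduces the Table 4 figures:

```python
from math import comb

# First-rank counts reported in Table 4 (labels abbreviated)
first_rank = {
    "grains_per_panicle": 1532,
    "grain_zinc": 882,
    "days_to_50pct_flowering": 464,
    "grain_protein": 321,
    "plant_height": 272,
    "grain_iron": 73,
    "thousand_grain_weight": 19,
    "grain_yield_per_plant": 7,
    "tillers_per_plant": 0,
    "panicle_length": 0,
}
total_pairs = comb(85, 2)  # 3570 pairwise genotype comparisons
percent = {k: 100.0 * v / total_pairs for k, v in first_rank.items()}
```

The counts sum exactly to 3570, confirming that every genotype pair was assigned to exactly one top-ranking character.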
Table 2 :
Intra (diagonal) and inter-cluster average of D 2 values of 85 landraces of rice
Table 3 :
Cluster means for 10 characters in 85 landraces of rice (cluster analysis).
Table 4 :
Relative contribution of different characters to genetic diversity in 85 landraces of rice. | 2019-04-01T13:16:25.779Z | 2016-09-24T00:00:00.000 | {
"year": 2016,
"sha1": "281c26a8ea9b681046bd98e0a6bc6f6f6e60877e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5958/0975-928x.2016.00045.4",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ee270d3b0eb0d287e87df56703faad1506d8c4db",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
119082742 | pes2o/s2orc | v3-fos-license | Numerical study on Anderson transitions in three-dimensional disordered systems in random magnetic fields
The Anderson transitions in a random magnetic field in three dimensions are investigated numerically. The critical behavior near the transition point is analyzed in detail by means of the transfer matrix method with high accuracy for systems both with and without an additional random scalar potential. We find the critical exponent $\nu$ for the localization length to be $1.45 \pm 0.09$ with a strong random scalar potential. Without it, the exponent is smaller but increases with the system sizes and extrapolates to the above value within the error bars. These results support the conventional classification of universality classes due to symmetry. Fractal dimensionality of the wave function at the critical point is also estimated by the equation-of-motion method.
I. INTRODUCTION
Since the pioneering work by Anderson 1 , the metal-insulator transition driven by disorder, which is called the Anderson transition (AT), has attracted much attention for many years 2-5 . The critical behavior of the AT is conventionally classified, depending on the symmetry of the Hamiltonian, into three universality classes: the orthogonal, the unitary and the symplectic classes 6 . Systems invariant under spin rotation as well as time reversal form the orthogonal class. The unitary class is characterized by the absence of time reversal symmetry. Systems invariant under time reversal but having no spin rotation symmetry belong to the symplectic class.
In the last decade, there has been considerable progress in the numerical study of the AT in three dimensions (3D) by the finite-size scaling analysis for quasi-1D systems 7 . In the early stage, it was not easy to confirm numerically for the 3D orthogonal class that the critical exponent is insensitive to the choice of the probability distribution of the random potential 8 . This discrepancy in exponents for different distributions of random potential has been removed by improving the accuracy of numerical calculations 9 and by taking into account the corrections to scaling 10 . With such a high-accuracy analysis, it has been concluded that the critical exponent for the orthogonal system can be distinguished from that for the unitary system 11 . These recent developments confirm the universality of critical exponents as well as the validity of the conventional classification of universality classes in the AT. It should be noted, however, that in most cases such analyses have been restricted to the AT near the band center in the presence of a random scalar potential, where the scaling analysis works fairly well. In contrast, for the AT away from the band center, no systematic scaling behavior has been observed 8,12 .
The AT in a magnetic field has been studied extensively, mainly in connection with the quantum Hall effect 13,14 . Accordingly, in most cases, the magnetic field was assumed to be uniform in space and the disorder was introduced by a random scalar potential. On the other hand, in recent years, there has also been considerable interest in the transport properties of a system subject to a spatially random magnetic field. The random magnetic field introduces randomness as well as the absence of invariance under time reversal in a system. In fact, it has been shown 15 that in 3D the AT occurs in the presence of the random magnetic field and without a random scalar potential.
The AT in a random magnetic field is driven by the coherent scattering due to a fluctuating vector potential. A nontrivial feature of this coherent scattering by a fluctuating vector potential has been pointed out 16 in a theory of strongly correlated spin systems. Much work has also been done on transport properties in 2D in a random magnetic field 17 , in particular in connection with the theory of the fractional quantum Hall effect 18 in a high magnetic field. It is thus an important issue to understand how the effect of coherent scattering in a strongly fluctuating random vector potential will show up in the AT.
The magnetic field breaks the time reversal symmetry and thus all systems in the magnetic field should belong to the unitary class. In fact, it has been demonstrated numerically in 3D 20,22 that in the presence of a random scalar potential, the critical exponent takes a universal value, irrespective of whether the magnetic field is uniform or random. The AT with a random potential and in a uniform magnetic field has been re-analyzed recently and the critical exponent for the localization length has been determined to be 1.43 ± 0.06 11 .
The AT, in 3D, in the presence of a random vector potential and without a random scalar potential, has also been investigated based on the finite-size scaling. The data suggested 15 that the mobility edge is very close to the band edge. The exponent for the localization length has been estimated to be ν ≈ 1 15 which is considerably smaller than that in the case with an additional random scalar potential and in a uniform magnetic field. This seemed to indicate that in 3D the AT driven solely by a random vector potential might exhibit critical behavior different from that observed in other unitary systems, for example systems having additional random scalar potential. Apparently, this questions the validity of the conventional classification of universality classes in AT. On the other hand, it should be recalled that the finite-size scaling analysis did not work for the AT near the effective band edge 8,12 . It is thus important to re-examine the applicability of the scaling ansatz to the AT driven solely by the random magnetic field in which the mobility edge lies quite close to the band edge.
In this paper, we report on a high-precision numerical finite-size scaling analysis for the AT in the random magnetic field. In order to clarify the origin of the above mentioned discrepancy between the critical exponent of the AT far away from the band center induced solely by randomness in a vector potential and the exponent obtained for other unitary systems, we have considered systems both with and without an additional random potential. We also evaluate the fractal dimension of the wave functions at the critical point based on the equation-of-motion method.
The paper is organized as follows. In the next section, the hamiltonian which we adopt is introduced. The finite-size scaling study on the critical phenomena is presented in section 3. In section 4, the fractal dimensionality of the wave function is discussed by means of the equation-of-motion method. Section 5 is devoted to summary and discussion.
II. MODEL
The model is defined by the Hamiltonian 15
$$H = \sum_i \varepsilon_i\, C_i^{\dagger} C_i - V \sum_{\langle i,j \rangle} e^{i\theta_{i,j}}\, C_i^{\dagger} C_j ,$$
where C_i^† (C_i) denotes the creation (annihilation) operator of an electron at the site i of a 3D cubic lattice. The energies {ε_i} constitute the random scalar potential, distributed independently and uniformly in the range [−W/2, W/2]. The Peierls phase factors exp(iθ_{i,j}) describe a random vector potential or magnetic field. We confine ourselves to phases {θ_{i,j}} which are distributed independently and uniformly in [−π, π]. The hopping amplitude V is taken as the energy unit, V = 1. The phases {θ_{i,j}} are related to the magnetic flux, for example, as
$$\theta_{i,\,i+\hat{x}} + \theta_{i+\hat{x},\,i+\hat{x}+\hat{y}} + \theta_{i+\hat{x}+\hat{y},\,i+\hat{y}} + \theta_{i+\hat{y},\,i} = 2\pi\, \frac{\phi_i}{\phi_0},$$
where φ_i and φ_0 = hc/|e| denote the magnetic flux through the plaquette (i, i + x̂, i + x̂ + ŷ, i + ŷ) and the unit flux, respectively. Here x̂ (ŷ) stands for the unit vector in the x (y) direction. Note that in the present system the condition that the total magnetic flux through any closed surface is zero is satisfied.
III. FINITE-SIZE SCALING STUDY
We consider quasi-1D systems with cross section M × M 7,9 . The Schrödinger equation Hψ = Eψ in such a bar-shaped system can be rewritten using transfer matrices T_n (2M² × 2M²) (n = 1, 2, . . .) as
$$\begin{pmatrix} \psi_{n+1} \\ \psi_{n} \end{pmatrix} = T_n \begin{pmatrix} \psi_{n} \\ \psi_{n-1} \end{pmatrix}, \qquad T_n = \begin{pmatrix} H_n - E\,I & -I \\ I & 0 \end{pmatrix},$$
where ψ_n and H_n denote the set of coefficients of the state ψ and the Hamiltonian of the n-th slice, respectively. The identity matrix is denoted by I. The off-diagonal parts of the transfer matrix T_n can be expressed by the identity matrix because the phases in the transfer direction can be removed by a gauge transformation 15 . The logarithms of the eigenvalues of the limiting matrix T are called the Lyapunov exponents. The smallest Lyapunov exponent λ_M along the bar is estimated by a technique which uses the product of these transfer matrices 5,7 . The relative accuracies for the smallest Lyapunov exponents achieved here are 0.2% for M ≤ 10 and 0.25% ∼ 0.3% for M = 12. The localization length ξ_M along the bar is given by the inverse of the smallest Lyapunov exponent, ξ_M = 1/λ_M .
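The transfer-matrix iteration described above can be sketched as follows. This is an illustrative implementation, not the production code of the paper: it uses open boundary conditions in the cross section, QR re-orthogonalization every few steps to keep the product numerically stable, and far shorter bars than the lengths needed for 0.2% accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def slice_hamiltonian(M, W):
    """Hamiltonian of one M x M slice: random on-site energies in
    [-W/2, W/2] and nearest-neighbour hoppings carrying random
    Peierls phases (open boundaries, V = 1)."""
    n = M * M
    H = np.zeros((n, n), dtype=complex)
    idx = lambda x, y: x * M + y
    for x in range(M):
        for y in range(M):
            H[idx(x, y), idx(x, y)] = rng.uniform(-W / 2, W / 2)
            for dx, dy in ((1, 0), (0, 1)):
                if x + dx < M and y + dy < M:
                    phase = np.exp(1j * rng.uniform(-np.pi, np.pi))
                    H[idx(x, y), idx(x + dx, y + dy)] = -phase
                    H[idx(x + dx, y + dy), idx(x, y)] = -np.conj(phase)
    return H

def smallest_lyapunov(M, W, E=0.0, length=2000, reorth=10):
    """Smallest positive Lyapunov exponent of the transfer-matrix
    product; 1/lambda_M gives the quasi-1D localization length."""
    n = M * M
    Q = np.eye(2 * n, dtype=complex)
    log_r = np.zeros(2 * n)
    for step in range(1, length + 1):
        H = slice_hamiltonian(M, W)
        T = np.block([[H - E * np.eye(n), -np.eye(n)],
                      [np.eye(n), np.zeros((n, n))]])
        Q = T @ Q
        if step % reorth == 0:
            Q, R = np.linalg.qr(Q)
            log_r += np.log(np.abs(np.diag(R)))
    gammas = np.sort(log_r / length)
    # Exponents come in +/- pairs; the smallest positive one sets xi_M.
    return float(gammas[gammas > 0].min())
```

For quantitative work the bar length must be large enough that the statistical error of λ_M falls below the quoted 0.2–0.3%.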
The assumption of one-parameter scaling for the renormalized localization length Λ_M = ξ_M/M reads
$$\Lambda_M = f\big((x - x_c)\, M^{1/\nu}\big) = \Lambda_c + \sum_{n=1}^{\infty} a_n \big[(x - x_c)\, M^{1/\nu}\big]^{n} , \qquad (6)$$
where x is the parameter driving the transition (the disorder W or the energy E) and x_c its critical value. By fitting our data to the above function, we can determine the critical exponent ν and the mobility edge accurately.
In practice, we truncated the series (6) at the third order (n = 3) and used the standard χ²-fitting procedure 23 . The error bars are estimated by using the Hessian matrix, and the confidence interval is chosen to be 95.4%. For the transition at the band center in the presence of a strong random scalar potential, clear scaling has been observed for the presently achievable sizes, 6 ≤ M ≤ 12. In fact, all the data (84 points) for M = 6, 8, 10, and 12 in the range 17.8 ≤ W ≤ 19.8 can be successfully fitted by the fitting function (6) up to the 3rd order, which has six fitting parameters including the critical point and the critical exponent. We have estimated the critical disorder and the exponent ν to be W_c = 18.80 ± 0.04 and ν = 1.45 ± 0.09 24 . The renormalized localization length Λ_c at the critical point is 0.558 ± 0.003. The error bars of these estimates are at least a factor of 3 smaller than those of the previous estimates 20 .
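A minimal sketch of this finite-size scaling fit is shown below. Instead of the full χ²-minimization with Hessian-based error bars, it grid-searches (W_c, ν) and solves the remaining expansion coefficients by linear least squares; the synthetic data are generated from the quoted estimates W_c = 18.8, ν = 1.45 and Λ_c = 0.558 purely to check that the procedure recovers them.

```python
import numpy as np

def fit_fss(W, M, Lam, nu_grid, Wc_grid):
    """Fit Lambda = Lam_c + sum_{n=1}^{3} a_n [(W - W_c) M^(1/nu)]^n
    by grid-searching (W_c, nu); for each grid point the linear
    coefficients (Lam_c, a1, a2, a3) follow from least squares."""
    best = None
    for nu in nu_grid:
        for Wc in Wc_grid:
            x = (W - Wc) * M ** (1.0 / nu)
            A = np.column_stack([np.ones_like(x), x, x**2, x**3])
            coef, res, *_ = np.linalg.lstsq(A, Lam, rcond=None)
            chi2 = float(np.sum((A @ coef - Lam) ** 2))
            if best is None or chi2 < best[0]:
                best = (chi2, Wc, nu, coef)
    return best  # (chi2, W_c, nu, [Lam_c, a1, a2, a3])

# Synthetic check: 84 points (4 sizes x 21 disorder values), as in the text
rng = np.random.default_rng(1)
Ms = np.repeat([6, 8, 10, 12], 21).astype(float)
Ws = np.tile(np.linspace(17.8, 19.8, 21), 4)
x_true = (Ws - 18.8) * Ms ** (1 / 1.45)
Lams = 0.558 - 0.25 * x_true + 0.05 * x_true**2 + 0.002 * x_true**3
Lams += rng.normal(0.0, 1e-3, Lams.size)
chi2, Wc_hat, nu_hat, coef = fit_fss(
    Ws, Ms, Lams,
    nu_grid=np.linspace(1.2, 1.7, 51),
    Wc_grid=np.linspace(18.6, 19.0, 41),
)
```

The grid resolution (0.01 in both W_c and ν) limits the precision here; the Hessian-based error bars of the paper require the full nonlinear χ² fit.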
In contrast, in the absence of the random scalar potential (W = 0) or in the presence of an additional weak random scalar potential (W = 1), for which the critical point lies near the band edge, we have found 24 that the correction to scaling is not negligible. Near the band edge, the density of states changes rapidly as a function of energy. We have thus performed high-accuracy transfer matrix calculations for the narrower energy range |E − E_c| ≤ 0.025 around the critical point for W = 0 and W = 1 24 . In both cases (W = 0 and W = 1), we have found that the estimate of the critical exponent tends to increase with the system sizes. In order to extrapolate the critical exponent for W = 0, we have made calculations for larger system sizes M = 14 and M = 16. In Table I we show the resulting estimates obtained with different combinations of system sizes. From Table I, we can see that the exponent ν tends to increase with the system sizes and is likely to saturate around ν ∼ 1.48. Within the error bars, the estimated values of ν for M ≥ 12 are consistent with 1.45 ± 0.09 obtained for the band center as well as with 1.43 ± 0.06 estimated in the uniform magnetic field 11 . No evidence has been found for ν ≈ 1, which was suggested by calculations with low accuracy 15 . The present results support the universality of the critical exponent in unitary systems. The positions of the critical points and the values of Λ_c estimated with different combinations of system sizes fluctuate for M ≥ 12 (Table I). The value Λ_c = 0.558 ± 0.003 at the band center seems to lie inside the range of this fluctuation. Conventionally, the value of Λ_c is also expected to be universal in unitary systems. Our results seem to be consistent with this universality of Λ_c .
The mobility edge trajectory in the presence of the random magnetic field is shown in figure 2. Each critical point (mobility edge) is estimated based on numerical data by the transfer matrix method with M = 6 ∼ 10. It should be noted that there exist extended states for energies larger than the critical energy E c ≈ 4.41 for W = 0. This type of reentrant phenomena in the energy-disorder plane has been commonly observed for systems with the uniform distribution of random scalar potential 26,27 . It is interpreted 26 that the enhancement of extended states for a weak additional random scalar potential is due to the enhancement of density of states at that energy regime.
IV. EQUATION-OF-MOTION METHOD
We now turn our attention to the properties of the wave function just at the AT in random magnetic fields. It is well known that at the AT the wave function shows a multifractal structure 28 , which leads to the scale-invariant behavior of the conductance distributions 29,30,11 and of the energy level statistics. [31][32][33][34][35][36][37][38] The direct way to investigate the wave functions is to diagonalize the Hamiltonian. This, however, is numerically very demanding. Instead, we calculate here the time evolution of wave packets to extract the information on the fractal dimension. We first prepare the initial wave packet |0⟩ close to the AT by diagonalizing a small cluster located at the center of the system. The time evolution of the state at time t is then obtained by |t + Δt⟩ = U(Δt)|t⟩, where U(Δt) is the time evolution operator. In order to perform the numerical calculation efficiently, we approximate U(Δt) by products of exponential operators as
$$U(\Delta t) \approx U_2(p\,\Delta t)\, U_2\big((1-2p)\,\Delta t\big)\, U_2(p\,\Delta t), \qquad U_2(\Delta t) = \prod_{i=1}^{q} e^{-\mathrm{i} H_i \Delta t/2\hbar} \prod_{i=q}^{1} e^{-\mathrm{i} H_i \Delta t/2\hbar},$$
with p = (2 − 2^{1/3})^{−1}, where H_1, · · · , H_q are a decomposition of the original Hamiltonian, H = Σ_i H_i, into parts which are simple enough to diagonalize analytically. 39 The square displacement of the wave packets is defined by ⟨r²(t)⟩ = ⟨t|r²|t⟩.
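The fourth-order decomposition with p = (2 − 2^{1/3})^{−1} can be checked on small matrices. The sketch below composes symmetric second-order (Trotter) steps for a two-part Hermitian decomposition and compares the result with the exact propagator; the 6 × 6 random Hermitian matrices are for illustration only, standing in for the analytically diagonalizable parts H_i of the lattice Hamiltonian.

```python
import numpy as np

def expmh(H, tau):
    """exp(-i H tau) for a Hermitian matrix H via eigendecomposition
    (hbar set to 1 for this illustration)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * tau)) @ V.conj().T

def U2(H_parts, dt):
    """Symmetric second-order (Strang/Trotter) step for H = sum_i H_i."""
    U = np.eye(H_parts[0].shape[0], dtype=complex)
    for H in H_parts:                # forward sweep, half steps
        U = expmh(H, dt / 2) @ U
    for H in reversed(H_parts):      # backward sweep, half steps
        U = expmh(H, dt / 2) @ U
    return U

def U4(H_parts, dt):
    """Fourth-order Suzuki composition with p = (2 - 2^(1/3))^-1."""
    p = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    return U2(H_parts, p * dt) @ U2(H_parts, (1 - 2 * p) * dt) @ U2(H_parts, p * dt)

# Compare against the exact propagator for a random Hermitian pair
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H1 = (A + A.conj().T) / 2
B = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H2 = (B + B.conj().T) / 2
dt = 0.01
err4 = np.linalg.norm(U4([H1, H2], dt) - expmh(H1 + H2, dt))
```

Because each factor is exactly unitary, the approximate propagator conserves the norm of the wave packet regardless of the step size, which is what makes the scheme attractive for long-time evolution.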
In the metallic phase, ⟨r²(t)⟩ is proportional to Dt, where D is the diffusion coefficient. In the insulating phase, it saturates to the square of the localization length, ξ². At the AT, anomalous diffusion 41,42 is expected. The fractal dimension D_2 is estimated from the autocorrelation function
$$C(t) = \frac{1}{t} \int_0^{t} dt'\, |\langle 0 | t' \rangle|^{2} ,$$
where C(t) is expected to decay as 43
$$C(t) \sim t^{-D_2/d} .$$
In Fig. 3, we show the results of C(t) for the transition at the center of the band in the presence of a strong random scalar potential (W = 18.8V). By diagonalizing a small cluster of 7 × 7 × 7 sites located at the center of the system, we follow the time evolution of wave packets in 101 × 101 × 101 systems. A geometric average of C(t) over 10 random field and potential configurations is performed. By fitting the data for t > 40ħ/V, the fractal dimensionality D_2 is estimated to be D_2 = 1.52 ± 0.18, considerably smaller than the space dimension d = 3. This value is consistent with estimates for the 3D system at the AT in a strong uniform magnetic field. 42,44

V. DISCUSSIONS

In summary, we have investigated in detail the AT in a random magnetic field based on the transfer matrix method with considerably high accuracy. In particular, whether or not the AT driven solely by the random vector potential (W = 0) exhibits critical behavior different from other unitary systems has been discussed. In order to clarify this point, we have performed the scaling analysis for the three critical points, namely E = 0, W = 0, and W = 1 (figure 2). For the transition at the band center (E = 0) in the presence of a strong additional random potential, clear scaling behavior has been observed and the exponent ν has been estimated to be 1.45 ± 0.09. This coincides with the value obtained for a unitary system in a uniform magnetic field 11 . It has been found, on the other hand, that the correction to scaling is not negligible at the presently achievable sizes for the transitions near the band edge (W = 1 and W = 0).
The exponents estimated for W = 0 by larger system sizes are consistent with those obtained for other unitary systems within the error bars. From the size dependence of ν, in contrast to the suggestion in ref. 15, no evidence has been found for ν ≈ 1. These results indicate the universality of ν in the unitary class and hence support the conventional classification of the AT by universality classes due to symmetry.
The mobility edge trajectory has also been obtained in the presence of the random magnetic field. Its qualitative shape turns out to be similar to those obtained for other systems with a uniform distribution of the random scalar potential.
We have also studied the diffusion of electrons at the AT in the presence of a random magnetic field. By solving the time-dependent Schrödinger equation numerically, we examine the time evolution of wave packets at the AT. From the asymptotic behavior of the autocorrelation function, we have extracted the fractal dimensionality of the critical wave function at the band center. | 2019-04-14T02:19:39.943Z | 1999-07-21T00:00:00.000 | {
"year": 1999,
"sha1": "030c8c1bf6a678b3af47dfa42c9aefd12c8b29c5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9907319",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0c3f547d6551ca5df320e228e5acf7ec021c8fb8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253185420 | pes2o/s2orc | v3-fos-license | Impact of sarcopenia on the prognosis and treatment of lung cancer: an umbrella review
Background Lung cancer is the leading cause of cancer-related mortality worldwide. Sarcopenia, defined as the loss of muscle mass and function, is known to cause adverse health outcomes. The purpose of this umbrella review was to integrate published systematic reviews and meta-analyses exploring sarcopenia and lung cancer to provide comprehensive knowledge on their relationship. Methods Eligible studies were searched from scientific databases until June 28, 2022. Critical appraisal was performed using A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2. The impact of sarcopenia on the pathophysiology, prevalence, and prognosis of lung cancer is summarized at the level of systematic reviews or meta-analyses. Results Fourteen reviews and meta-analyses were conducted. The methodological quality was high for one review, low for nine, and critically low for four. The most common standard for diagnosing sarcopenia in the lung cancer population is computed tomography (CT) to measure the skeletal muscle index at the third lumbar vertebra (L3). Sarcopenia was highly prevalent among patients with lung cancer, with a pooled prevalence ranging from 42.8% to 45.0%. The association between sarcopenia and increased postoperative complications and decreased disease control rates with immune checkpoint inhibitors has been demonstrated. Mortality was significantly higher in sarcopenic patients than in non-sarcopenic patients with lung cancer, regardless of the stage of disease or type of treatment. Conclusions Sarcopenia is a poor prognostic factor for lung cancer. Future studies are necessary to clarify the pathophysiology of sarcopenia and develop effective interventions for sarcopenia in patients with lung cancer. Supplementary Information The online version contains supplementary material available at 10.1007/s12672-022-00576-0.
Introduction
Lung cancer is a common and unfavorable type of malignancy [1]. Its incidence is on the rise globally, with more than two million estimated new cases per year [1]. The age-standardized cumulative lifetime risk is 3.80% for men and 1.77% for women, making it the second most prevalent cancer in both sexes [2,3]. Surgical excision, chemotherapy, and radiotherapy have been the traditional cornerstones in the treatment, followed by targeted therapy and immunotherapy. The improved survival in industrialized countries is attributed to decline in tobacco smoking, early detection via lowdose chest tomography, and easy access to the state-of-the-art treatment modalities [3]. Despite substantial efforts and advances, the latest 5-year survival rate (from 2010 to 2016) for lung cancer in the United States of America is 20.5% [4].
Tumor/node/metastasis (TNM) staging based on tumor size, local invasion, and distant spread is the prevailing framework for estimating life expectancy in the cancer population [5]. However, the utility of the TNM system is limited in advanced cancer and in patients receiving targeted therapy and immunotherapy. The functional status represented by the Eastern Cooperative Oncology Group (ECOG) Performance Status Scale is of independent prognostic value in lung cancer. However, its clinical value is limited by its subjective assessment [6]. Weight loss at the initial diagnosis was independently associated with poor outcomes in patients with non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC) [7]. Further, patients with NSCLC and weight loss are less responsive to chemotherapy and have an increased withdrawal rate [8]. Therefore, numerous ongoing studies aim to identify more reliable prognostic indicators other than weight loss.
Sarcopenia is a skeletal muscle disorder characterized by progressive generalized loss of muscle mass and function [9,10]. In case of low muscle strength, sarcopenia can be confirmed by measuring the muscle quantity and quality.
Although it was first introduced as a geriatric disease, the condition is not exclusive to older adults and can accompany many diseases. Its associations with cardiac disease, respiratory disease, cognitive impairment, and musculoskeletal disorders have also been observed [11,12]. Sarcopenia is a pressing clinical issue because it poses increased risks for falls, fractures, functional impairment, hospitalizations, and mortality, and creates hefty healthcare burdens [11,13]. Accordingly, there has been great interest in the impact of sarcopenia on lung cancer, with several systematic reviews and meta-analyses published to explore the relationship between them. This umbrella review aimed to compile evidence from these systematic reviews and meta-analyses to evaluate the existing information on the interplay between sarcopenia and lung cancer.
Quality assessment
Two authors (T.-Y. L. and W.-T.W.) separately performed the critical appraisal of included reviews using A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, and a consensus was reached after discussion [15]. AMSTAR2, a methodological quality evaluation tool, comprises 16 items, seven of which are of particular importance, i.e., the presence of a precedent protocol, comprehensive literature search, written inclusion/exclusion criteria, risk of bias assessment, appropriate statistical method, sufficient data interpretation, and publication bias consideration. After scoring yes, partial yes, or no for each item, the overall confidence of the systematic review or meta-analysis was graded as high, moderate, low, or critically low.
Data synthesis
The results of this umbrella review are presented at the systematic review or meta-analysis level. We addressed the similarities and differences in the population, criteria for sarcopenia, and relevant outcomes to gain a complete understanding of the association between sarcopenia and lung cancer. Details of the studies included in each of the eligible reviews are outlined in the Additional file 1.
Literature search
Of the 1797 records generated from the original database search, 1764 were removed for being duplicates or nonrelevant literature after title and abstract screening. Full texts were screened for the remaining 33 articles; 15 articles were excluded wherein patients with lung cancer were not regarded as a subgroup during analysis and four for describing nutritional status without focusing on sarcopenia. Finally, 14 reviews [16][17][18][19][20][21][22][23][24][25][26][27][28][29] fulfilled all eligibility criteria and were included in our umbrella review (Fig. 1).
Diagnosis and prevalence of sarcopenia in lung cancer
In a pioneering review by Collins et al. [16], sarcopenia was defined by a handful of measuring techniques, including dual-energy X-ray absorptiometry (DEXA), bioelectrical impedance analysis (BIA), computed tomography (CT), upper arm dimensions, grip strength, and skinfold thickness. More recently, CT has become the dominant tool for confirming the diagnosis (Table 3). The skeletal muscle index (SMI, cm²/m²) is calculated by dividing the cumulative skeletal muscle area (SMA, cm²) on a transverse CT slice at the level of the third lumbar vertebra (L3) by the square of the patient's height. The psoas muscle index (PMI) is calculated as total psoas muscle area (cm²) / height² (m²). Skeletal muscle density (HU) is another indicator of body composition on CT images, reflecting intramuscular adipose tissue infiltration, or muscle quality. The systematic review by McGovern et al. [26] revealed that the most popular diagnostic thresholds for sarcopenia were derived from the large-population studies by Prado et al. [30] (n = 250) and Martin et al. [31] (n = 1473).
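As an illustration of the CT-based definitions, the sketch below computes the L3 skeletal muscle index and applies the sex-specific cut-offs commonly attributed to Prado et al. [30] (< 52.4 cm²/m² for men, < 38.5 cm²/m² for women). Treat the numeric cut-offs as illustrative, since the primary studies in these reviews used varying thresholds.

```python
def skeletal_muscle_index(sma_cm2, height_m):
    """L3 skeletal muscle index (cm^2/m^2): muscle area / height squared."""
    return sma_cm2 / height_m ** 2

def is_sarcopenic_prado(smi, sex):
    """Sex-specific L3-SMI cut-offs commonly attributed to Prado et al.
    (illustrative values): < 52.4 cm^2/m^2 for men, < 38.5 for women."""
    return smi < (52.4 if sex == "M" else 38.5)

# Hypothetical patient: 130 cm^2 of L3 muscle area, 1.70 m tall
smi = skeletal_muscle_index(130.0, 1.70)  # ~45.0 cm^2/m^2
```

With these cut-offs, the same SMI value can classify a man as sarcopenic but a woman as non-sarcopenic, which is one reason pooled prevalence estimates are sensitive to the threshold set chosen.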
Pathophysiology and treatment of sarcopenia in lung cancer
Systematic reviews by Collins et al. [16] and Nishimura et al. [19] reported concurrent loss of body weight and muscle mass in patients with lung cancer. No significant difference in preoperative serum albumin was noted between the sarcopenic and non-sarcopenic groups [19]. Besides, carcinoembryonic antigen was associated with preoperative sarcopenia in a few, but not all, studies according to the review by Nishimura et al. [19]. There was no robust evidence on how changes in protein metabolism or genetic polymorphisms contribute to the development of muscle mass loss in the lung cancer population [16]. Among the epidemiological variables, Kawaguchi et al. [24] reported a significant association between sarcopenia and smoking habits in three [41][42][43] out of five studies [33,[41][42][43][44]. Nishimura et al. [19] observed that aging was significantly associated with sarcopenia in half of the recruited studies, while there was no significant association between sarcopenia and the pathologic staging of lung cancer.
The performance status was not related to preoperative sarcopenia in the review by Nishimura et al. [19]. In contrast, Collins et al. [16] reported that patients with cachexia had a reduced walking distance and quadriceps strength.
There are incoherent findings about the association between forced expiratory volume and preoperative sarcopenia [19]. The effect of nutritional supplements (such as fish oil, protein supplement, and adenosine-5'-triphosphate infusion) on slowing/reversing muscle loss or on improving survival in patients with lung cancer has been contradictory [16].
The impact of sarcopenia on the prognosis of lung cancer
The prognostic value of sarcopenia in patients with lung cancer was fundamental in the majority of the included reviews. Four reviews [17,20,22,29] incorporated diverse treatment options, such as surgery, chemotherapy, immunotherapy, radiotherapy, or palliative care. Three reviews [18,19,24] emphasized on the postoperative outcomes and four [21,23,25,27] focused on immunotherapy.
Postoperative complication rate
The postoperative complication rate was increased in patients with sarcopenia, with an odds ratio (OR) of 2.51 (95% CI: 1.55-4.08) in the meta-analysis by Nishimura et al. [19] (involving NSCLC, SCLC, and metastatic disease to the lung) and 1.86 (95% CI: 1.42-2.44) in that of Kawaguchi et al. [24] (targeting NSCLC). Additionally, two reviews [19,24] reported that sarcopenic patients were more likely to sustain major complications, according to a single study of 328 patients with NSCLC (16.1% vs. 7.1%, p = 0.036) [42]. The lower the SMI/PMI threshold for diagnosing sarcopenia, the higher was the risk of postoperative complications in NSCLC [24].
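Pooled odds ratios like those quoted above come from inverse-variance meta-analysis of log odds ratios; a minimal fixed-effect sketch is shown below. The example re-pools the two review-level estimates purely to demonstrate the arithmetic — the two meta-analyses overlap in primary studies, so the resulting number is not a valid combined estimate.

```python
import math

def pooled_or(estimates):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each entry is (OR, lower 95% CI, upper 95% CI); the standard error
    of log(OR) is recovered from the CI width on the log scale."""
    num = den = 0.0
    for or_, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # weight = inverse variance of log(OR)
        num += w * math.log(or_)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

# Re-pooling the two review-level ORs (illustration only)
pooled, ci = pooled_or([(2.51, 1.55, 4.08), (1.86, 1.42, 2.44)])
```

A proper synthesis would pool the non-overlapping primary studies directly and would typically consider a random-effects model to account for between-study heterogeneity.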
Overall response rate
The overall response rate refers to the percentage of patients whose tumors disappear (complete response) or decrease in size (partial response) after treatment. The disease control rate describes the proportion of patients with decreased or stable disease burden during the study period [45]. These endpoints in patients with NSCLC receiving immunotherapy were pooled in two meta-analyses [21,27], which revealed a significantly worse disease control rate in sarcopenic than in non-sarcopenic participants. Pre-treatment sarcopenia and deteriorating sarcopenic status after initiating therapy were linked to a decreased disease control rate (risk ratio [RR]: 0.62, 95% CI: 0.19-1.53) [21]. However, although sarcopenia showed an unfavorable overall response rate (RR: 0.54, 95% CI: 0.19-0.53), the difference between sarcopenic and non-sarcopenic patients was not statistically significant. Interestingly, a pooled result from three studies [38,46,47] suggested that sarcopenia did not increase the rate of immune-related adverse events (RR: 0.99, 95% CI: 0.21-4.67) such as dermatitis, colitis, pneumonitis, or endocrinopathies [21].
Progression-free survival
Progression-free survival implies the time before the detection of disease progression or patient's death [48]. The duration without tumor relapse after treatment is represented by disease-free survival [49]. Pre-treatment sarcopenia was significantly related to shortened progression-free survival rates in patients with lung cancer receiving immunotherapy in the meta-analyses by Wang et al. [21], Deng et al. [23], Lee et al. [25] and Takenaka et al. [27]. The association of sarcopenia with disease-free survival varied among different patient populations. Deng et al. [18] and Yang et al. [20] did not acknowledge a significant difference in the postoperative disease-free survival between sarcopenic and non-sarcopenic patients with NSCLC [18,20]. However, Kawaguchi et al. [24] reported that patients with NSCLC and sarcopenia had reduced disease-free survival after lung resections (OR: 1.66, 95% CI: 1.00-2.74). Poorer disease-free survival was also observed in sarcopenic patients with advanced NSCLC on immune checkpoint inhibitors (ICIs) with a hazard ratio (HR) of 1.98 (95% CI: 1.32-2.97) [21].
Overall survival
Patients with lung cancer and concomitant sarcopenia had worse overall survival than non-sarcopenic patients, as demonstrated repeatedly in our umbrella review. Across the various meta-analyses, the pooled HR and RR of mortality for sarcopenic patients ranged between 1.27 and 4.68. Regarding cancer treatment, sarcopenia was significantly associated with poor overall survival among either operated [18,19,24] or immunotherapy-managed [21,23,27] patients with NSCLC. Wang et al. [21] showed that sarcopenia was an independent unfavorable prognostic factor for patients with NSCLC on ICIs with an HR of 1.61 (95% CI: 1.24-2.10), and it indicated higher mortality for the subgroup using nivolumab (HR: 2.10, 95% CI: 1.22-3.61). Buentzel et al. [17] and Yang et al. [20] reported that the cancer stage did not affect the predictive value of sarcopenia for mortality. Deng et al. [18] noted that this was especially true for patients with stage I disease. In their meta-analysis, sarcopenia led to significantly poorer overall survival in patients with stage I NSCLC (RR: 2.09, 95% CI: 1.51-2.88). However, the correlation was not significant when studies recruiting NSCLC patients of all stages were analyzed (RR: 1.37, 95% CI: 0.78-2.42) [18]. For every one-unit fall in SMA and SMI, or for a one-degree decrease in the phase angle by BIA during treatment for lung cancer, a 4% increase in mortality was observed [17]. Wang et al. [21] also observed that the presence of muscle loss under immunotherapy was predictive of poor overall survival (HR: 4.97, 95% CI: 2.39-10.32). Nonetheless, there were inconsistent findings regarding the median overall survival [20]. Sarcopenic patients had significantly poorer median overall survival than non-sarcopenic patients in SCLC (8.6 vs. 16.8 months, p = 0.031) [50], stage I NSCLC (32 vs. 112 months, p < 0.01) [51] and stage IV NSCLC (12.6 vs. 23.5 months, p = 0.035) [52] cohorts. However, the difference was not significant in stage IIIB-IV NSCLC (7.5 vs. 7.9 months, p = 0.490) [32] (Table 4).
Discussion
According to this umbrella review, sarcopenia was prevalent among patients with lung cancer and served as an unfavorable prognostic factor. Specifically, sarcopenia was significantly associated with higher postoperative complication rates, lower disease control rates in patients using ICIs, and poorer overall survival. However, it did not increase the risk of immune-related side effects in patients receiving ICIs for lung cancer. The predictive value of sarcopenia for increased mortality remained unchanged across patients with different tumor types or those using distinct anti-cancer therapies. The findings of this umbrella review are summarized in Fig. 2.
We highlighted the pervasiveness of muscle depletion in lung cancer, with an overall prevalence of sarcopenia ranging from 42.8 to 45.0%. More patients with advanced disease were sarcopenic than those with early-stage lung cancer. Sarcopenia is a part of the multifactorial cachexia syndrome. Patients suffering from cachexia experience profound body weight loss, primarily from the wasting of skeletal muscle and adipose tissue, anemia, and extra-cellular fluid imbalance [53]. The prevalence of cachexia ranges from 36 to 61% in NSCLC [54-56]. Anorexia, accelerated resting energy expenditure, increased lipolysis, and depression of protein synthesis coupled with rising protein degradation play a role in the development of cachexia [57]. Notably, although cachectic patients are known to be sarcopenic, the majority of sarcopenic people may not be cachectic [58]. Changes in the intertwined epigenetic, cellular, and hormonal pathways of skeletal muscle metabolism that induce sarcopenia are not yet fully understood [11]. Immobility and insufficient calorie intake are the primary driving causes [59]. In patients with lung cancer, malignancy-related pain, fatigue, and depression could lead to disuse atrophy. The side effects of antineoplastic therapy, such as nausea, vomiting, and altered taste, exacerbate malnutrition. Reduced muscle strength hinders ambulatory ability, creating a disabling vicious cycle.
In our umbrella review, we noticed that various criteria had been employed to define sarcopenia in patients with lung cancer. The muscle mass at the third lumbar vertebra level (L3) upon CT imaging was the most widely used standard because it closely reflects the whole-body fat-free mass [60]. Instead of the L3 landmark, some researchers measured thoracic muscle mass because it is related to the condition of the respiratory muscles [19]. Nishumura et al. [19] emphasized that the vertebral level of measurement did not interfere with the predictive value of sarcopenia for postoperative complications.
A handful of techniques can be used to determine body composition. Although DEXA and BIA are cost-effective, their estimations can be altered by the individual's hydration status (which is often abnormal in the ill) along with the inconsistencies across different instrument brands and reference populations [11]. In contrast, CT can provide detailed imaging of specific tissues. Moreover, the examination is routinely performed throughout the cancer workup and follow-up. However, CT only measures the muscle quantity; it is unclear whether diagnosing sarcopenia without assessing muscle strength affects the predictive value. Additionally, there was considerable heterogeneity in the cutoff values, among which the L3 SMI thresholds proposed by Prado et al. [30] (men: < 52.4 cm²/m²; women: < 38.5 cm²/m²) and Martin et al. [31] (men: SMI < 43 cm²/m² for those with body mass index [BMI] < 25 kg/m², SMI < 53 cm²/m² for those with BMI ≥ 25 kg/m²; women: SMI < 41 cm²/m²) were the most widely adopted. Kawaguchi et al. [24] suggested that L3 PMIs of < 6.36 cm²/m² (men) and < 3.92 cm²/m² (women) were optimal for predicting survival, while < 3.70 cm²/m² (men) and < 2.50 cm²/m² (women) were optimal for predicting postoperative complications. Further studies are needed to establish the most suitable cutoff values of lean body mass for the association with various prognostic parameters in patients with lung cancer.
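The BMI-dependent cutoffs described above amount to a simple decision rule. The sketch below is purely illustrative (the function names are mine and this is not a validated clinical tool); it only encodes the published L3 SMI thresholds:

```python
def is_sarcopenic_martin(smi: float, sex: str, bmi: float) -> bool:
    """L3 SMI cutoffs (cm^2/m^2) from Martin et al. [31].

    Men: SMI < 43 if BMI < 25 kg/m^2, SMI < 53 if BMI >= 25 kg/m^2.
    Women: SMI < 41 regardless of BMI.
    """
    if sex == "male":
        cutoff = 43.0 if bmi < 25.0 else 53.0
    elif sex == "female":
        cutoff = 41.0
    else:
        raise ValueError("sex must be 'male' or 'female'")
    return smi < cutoff

def is_sarcopenic_prado(smi: float, sex: str) -> bool:
    """L3 SMI cutoffs (cm^2/m^2) from Prado et al. [30]:
    men < 52.4, women < 38.5."""
    cutoff = 52.4 if sex == "male" else 38.5
    return smi < cutoff
```

Note how the same patient (e.g., a man with SMI of 45 cm²/m²) can be classified differently depending on BMI under the Martin criteria, which illustrates the heterogeneity discussed above.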
The impact of sarcopenia on the prognosis of lung cancer
Sarcopenia is a strong predictor of increased postoperative complications. Prior studies have delineated the deteriorating influence of sarcopenia on invasive procedures, such as hip fracture surgery, emergent abdominal surgery, and gastrectomy for cancer [61-63]. Adequate nutrition and tissue perfusion are the basis for wound healing; however, sarcopenia is associated with anemia and therefore impedes tissue regeneration [64]. Respiratory muscles of sarcopenic patients are weakened by the hypercatabolic state and increased levels of pro-inflammatory cytokines such as interleukin (IL)-6, tumor necrosis factor (TNF)-α, and transforming growth factor (TGF)-β [65]. The ensuing difficulty of weaning from ventilator support could predispose patients to further deconditioning, pulmonary infections, longer intensive care unit stays, and ultimately death. The risk of acute respiratory failure and 30-day mortality was significantly higher in sarcopenic patients with lung cancer after pneumonectomy [39]. Current evidence suggests that patients with NSCLC and sarcopenia have inferior responsiveness to immunotherapy and progression-free survival. The goal of immunotherapy is to enhance immune surveillance, such as deploying T cells to eradicate cancer cells [66]. Muscles regulate the immune response through soluble myokines, cell surface molecules, and cell-to-cell interactions [67]. Wasting of skeletal muscles is likely to disrupt the equilibrium of the muscle-immune systems and impair immune cell production. Furthermore, T cells become functionally incompetent in patients with cancer due to this miscommunication between skeletal muscles and lymphoid organs [68]. The "exhausted" T cells may in turn compromise the efficacy of immunotherapy [68]. The action of immunotherapy in patients with lung cancer may also be modulated by the gut and lung microbiome (gut-lung axis) [69].
Malnutrition, chronic infections, and antibiotic overuse presumably distort the intrinsic gut ecosystem, leading to a subsequent pro-inflammatory status and sarcopenia [69].
There are inconsistent results regarding the association between sarcopenia and disease-free survival in patients with lung cancer. This may be due to the limited number of original studies conducted in the early years. In the meta-analyses by Deng et al. [18] and Yang et al. [20], disease-free survival was computed from the same three studies [33,50,51]. Although both reviews noted a trend towards poor disease-free survival for sarcopenic patients, neither of them revealed a statistically significant difference. Later, Kawaguchi et al. [24] demonstrated shortened disease-free survival for sarcopenic patients with surgically treated NSCLC based on six studies [33,43,50,51,70,71]. Nevertheless, our umbrella review showed that meta-analyses on the direct impact of sarcopenia on cancer recurrence, distant metastasis, and toxicity of chemotherapy and radiotherapy were lacking.
Sarcopenia predicted poor overall survival in patients with lung cancer; similarly, sarcopenia had a negative impact on the survival of patients with operated NSCLC. Although Deng et al. [18] reported that the predictive value was more robust for stage I patients, merely one study [72] analyzing stage I-IV patients reported no significant impact of sarcopenia on the overall survival. The prognosis was also inferior in sarcopenic patients with NSCLC receiving immunotherapy. Notably, there are limited data on the survival outcomes of patients receiving chemotherapy. The mechanism by which loss of muscle mass shortens lung cancer survival can be interpreted in several ways. First, sarcopenia on its own is related to increased all-cause mortality regardless of age and sex [73]. Second, performance status, which has recently been included in the diagnostic criteria of the Asian Working Group for Sarcopenia, is recognized as a prognostic factor for lung cancer [74]. Deteriorated physiological reserve, a hallmark of frailty and sarcopenia, lowers the patient's tolerance to aggressive therapeutic approaches, resulting in substandard dosing or premature treatment termination. Studies have shown that sarcopenic cancer patients had poor compliance during chemotherapy [75]. Lastly, hampered treatment response and added complication risks in sarcopenic patients, as also shown in our review, have adverse effects on cancer prognosis.
Our umbrella review has some limitations. First, it was inherently subject to biases in the included systematic reviews and meta-analyses. Complex interactions among skeletal muscles, inflammation, and the immune system are elusive; thus, there is a knowledge gap between the mechanism and treatment of sarcopenia in patients with lung cancer. Further research is needed to clarify the influence of sarcopenia on metastasis, recurrence, treatment response/toxicity, and quality of life in patients with lung cancer. Likewise, future studies verifying the predictive power of sarcopenia for various clinical outcomes in different subtypes and stages of lung cancer are also needed.
Conclusions
Sarcopenia is a major health threat in lung cancer, affecting up to half of all patients. Its diagnosis in this population should not be underestimated because of its association with elevated postoperative complications, decreased immunotherapy response rates, and increased mortality. In patients with sarcopenia and lung cancer, survival is adversely affected regardless of the cancer type (NSCLC/SCLC), stage, or treatment option. Therefore, sarcopenia is a robust prognostic factor for therapeutic responses and outcomes in patients with lung cancer. Further research is needed regarding the pathophysiology and interventions in the lung cancer population.
Data availability Not applicable.
Code availability Not applicable.
Competing interests
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A comment on the paper "On the orbit of the LARES satellite", by I. Ciufolini
In this note we comment on a recent paper by I. Ciufolini about the possibility of placing the proposed terrestrial satellite LARES in a low-altitude, nearly polar orbit in order to measure the general relativistic Lense-Thirring effect with its node. Ciufolini claims that, for a departure of 4 deg in the satellite's inclination $i$ from the ideal polar configuration (i = 90 deg), the impact of the errors in the even zonal harmonics of the geopotential, modelled with EIGEN-GRACE02S, would be nearly zero, allowing for a few-percent measurement of the Lense-Thirring effect. Instead, we find that, with the same Earth gravity model and for the same values of the inclination, the upper bound of the systematic error due to the mismodelling in the even zonals amounts to 64% of the relativistic effect investigated.
1 The polar configuration for measuring the Lense-Thirring effect

The possibility of measuring the general relativistic gravitomagnetic Lense-Thirring effect by means of the node of a LAGEOS-like satellite placed in a relatively low-altitude (a ∼ 8000 km), polar (i ∼ 90 deg) orbit (POLARES in the following) was proposed for the first time by Lucchesi and Paolozzi (2001) and, subsequently, criticized by Iorio (2002). The benefits of such an idea mainly lie in the possibility of using a relatively cheap launcher vehicle and in the fact that, for a perfectly polar configuration (i = 90 deg), all the classical secular precessions induced on the node by the even (ℓ = 2, 4, 6, ...) zonal (m = 0) harmonic coefficients Jℓ of the Newtonian multipolar expansion of the terrestrial gravitational potential, which are proportional to cos i, vanish. The main drawbacks of such an orbital configuration are as follows:

• The satellite's node is perturbed, among other things, by the ℓ = 2, m = 1 constituent of the solar K1 tide, whose period is equal to that of the spacecraft's node itself: for i ∼ 90 deg the node precesses very slowly, so that K1 would mimic an aliasing secular trend over an observational time span of a few years, compromising the recovery of the genuine relativistic linear trend of interest. This general feature of the motion of a polar satellite was already recognized by Peterson (1997) and Iorio (2005a) in the framework of the GP-B mission.
• This problem is avoided by choosing an inclination a few deg apart from the ideal polar configuration. But, in this case, the systematic error δμ induced by the mismodelled part of all the even zonal harmonics is enhanced by the low altitude of the satellite, to a degree depending on the accuracy of the gravity model used. Iorio (2002) used the full covariance matrix of EGM96 (Lemoine et al. 1998) up to degree ℓ = 20, showing that for orbits just 1 deg apart from i = 90 deg the impact of the mismodelled even zonals considered amounted to 40%. Such an estimate is likely optimistic because of the use of the correlations among the solved-for even zonals.
In (Iorio 2002) the possibility of using POLARES in conjunction with the existing LAGEOS and LAGEOS II satellites according to the well-known linear combination approach was investigated as well: it turned out to be unfeasible because of the quite large value of the coefficient with which the POLARES node would enter such combinations. Iorio (2005b) extensively studied the impact of the new Earth gravity models by CHAMP and GRACE on the possibility of using a new satellite to measure the Lense-Thirring effect. Among other things, the POLARES configuration (a = 8000 km and e = 0.04) was re-analyzed with the EIGEN-CG01C model (Reigber et al., 2006), up to degree ℓ = 20 and, much more conservatively than in (Iorio 2002), by linearly summing up the absolute values of the individual mismodelled classical precessions; the situation is now improved with respect to the EGM96 case, but it turned out that for a shift of just 2 deg in i with respect to the ideal polar geometry the bias due to the mismodelling in all the uncancelled even zonal harmonics still amounts to about 25%. In Iorio (2005b) also the linear combination scenario with LAGEOS and LAGEOS II was investigated showing that for a = 8000 km, e = 0.04, and 60 deg< i <80 deg the systematic error due to the mismodelled even zonals is 1-3%.
The departures from the ideal polar configuration
The subject thus seemed to have been treated exhaustively, when a new paper on it by Ciufolini (2006) appeared. Basically, the only novelty of such work, which mainly reproduces the content of Section 4.2.1 and Section 4.4 of Iorio (2005b) without quoting it, is a huge underestimation of the impact of the uncertainties in our knowledge of the geopotential on a certain orbital configuration of POLARES. Indeed, Ciufolini (2006), who used the EIGEN-GRACE02S Earth gravity model (Reigber et al., 2005), after discussing the problem of the K1 tide, proposed to circumvent it by adopting for POLARES an orbital configuration with a = 7878 km and i = 86/94 deg. The upper bound of the systematic error due to the even zonals can be evaluated as

δμ ≤ Σ_{ℓ=2} |Ω̇_{.ℓ}| δJ_ℓ,   (1)

by using the calibrated errors in J_ℓ (Reigber et al. 2005). The coefficients Ω̇_{.ℓ} of the classical node precessions were explicitly worked out up to degree ℓ = 20 in (Iorio 2003); for, e.g., ℓ = 2 we have

Ω̇_{.2} = -(3/2) n (R/a)² cos i / (1 - e²)²,   (2)

where R denotes the Earth's mean equatorial radius and n = √(GM/a³) is the Keplerian mean motion. Ciufolini (2006) did not explain how he assessed the error due to the even zonals (root-sum-square calculation? Sum of the absolute values of the individual errors?), apart from claiming that he used the analytical expressions of the nodal precession of a satellite, up to ℓ = 10, from an unspecified reference, R. Tauraso (2004). For a = 7878 km, e = 0.04 and i = 86/94 deg we get δμ ≤ 64%.
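As a numerical check of the ℓ = 2 term, equation (2) can be evaluated for the proposed orbit. The sketch below is illustrative: the constants are standard textbook values, and the δJ2 used at the end is a hypothetical figure of my own, not the actual EIGEN-GRACE02S calibrated error, which must be read off the model's variance matrix.

```python
import math

# Physical constants (SI units; standard values, not taken from the paper)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
R = 6378136.3         # Earth's mean equatorial radius, m

# Proposed POLARES/LARES orbit
a = 7878e3            # semimajor axis, m
e = 0.04              # eccentricity
i = math.radians(86.0)  # inclination

n = math.sqrt(GM / a**3)  # Keplerian mean motion, rad/s

# Coefficient of the classical J2 node precession, eq. (2):
# dOmega/dJ2 = -(3/2) n (R/a)^2 cos(i) / (1 - e^2)^2
Omega_dot_2 = -1.5 * n * (R / a)**2 * math.cos(i) / (1 - e**2)**2  # rad/s per unit J2

# Convert to milliarcseconds per year
MAS_PER_RAD = math.degrees(1) * 3600e3   # ~2.0626e8 mas/rad
SEC_PER_YR = 365.25 * 86400
coeff_mas_yr = Omega_dot_2 * MAS_PER_RAD * SEC_PER_YR  # ~ -4.0e11 mas/yr per unit J2

# Hypothetical calibrated error in J2, chosen only for illustration
delta_J2 = 1.3e-10
delta_Omega_J2 = abs(coeff_mas_yr) * delta_J2  # mismodelled J2 precession, mas/yr

print(f"Omega_dot_.2 = {coeff_mas_yr:.3e} mas/yr per unit J2")
print(f"|dOmega(J2)| = {delta_Omega_J2:.1f} mas/yr "
      f"({delta_Omega_J2 / 116.6:.0%} of the 116.6 mas/yr Lense-Thirring rate)")
```

With the illustrative δJ2 above, the J2 contribution alone is a sizeable fraction of the 116.6 mas/yr Lense-Thirring rate, consistent with the dominance of the J2 term noted in Table 1.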
In Table 1 we give the details of our calculation. As can be noted, the uncancelled precession due to δJ2 amounts to 70% of the entire error.
In addition to the static part of the geopotential, its time-dependent components must also be considered. In particular, for i = 90 ± 4 deg, the mismodelled part of the ℓ = 2, m = 0 constituent of the 18.6-year tide would have a serious aliasing impact on the sought few-percent measurement, especially over an observational time span of just 3 years, as proposed by Ciufolini (2006). The uncancelled secular variations J̇2, J̇4, J̇6 of the even zonals would be another source of systematic error.
Thus, it seems to us very difficult to agree with the conclusion by Ciufolini (2006): "A nearly polar orbit for LARES at an altitude of about 1500 km would be suitable for a measurement of the Lense-Thirring effect with accuracy of a few percent." A new satellite can be fruitfully used only in conjunction with LAGEOS and LAGEOS II. Such existing satellites, however, would set the total realistic accuracy obtainable to a few percent level because of the impact of the non-gravitational forces acting on them, independently of how well they could be reduced on LARES. Indeed, it is not clear if and how LAGEOS and LAGEOS II could benefit from the reduction of the non-gravitational forces on LARES. Such interesting technological and engineering efforts (Bellettini et al. 2006) could likely turn out to be really and fully useful if the launch of at least two entirely new spacecraft was implemented (Iorio 2005b).

Table 1: Individual mismodelled node precessions δΩ(Jℓ) ≡ |Ω̇_{.ℓ}| δJℓ induced by the calibrated errors in Jℓ, ℓ = 2, 4, 6, ..., 40, in milliarcseconds per year (mas yr⁻¹), according to the variance matrix of the EIGEN-GRACE02S Earth gravity model (Reigber et al. 2005) for a = 7878 km, e = 0.04, i = 86 deg. The mismodelled precessions for ℓ ≥ 30 are smaller than 0.1 mas yr⁻¹. The Lense-Thirring effect for such an orbital configuration amounts to 116.6 mas yr⁻¹. The upper bound of the total error, δμ ≤ Σ_{ℓ=2}^{40} δΩ(Jℓ), is quoted, in mas yr⁻¹, in the last row: it amounts to 64% of the Lense-Thirring effect. The most important contribution comes from J2, whose mismodelled precession amounts to 70% of the total error.
Pupillary response to representations of light in paintings
It is known that, although the level of light is the primary determinant of pupil size, cognitive factors can also affect pupil diameter. It has been demonstrated that photographs of the sun produce pupil constriction independently of their luminance and other low-level features, suggesting that high-level visual processing may also modulate pupil response. Here, we measure pupil response to artistic paintings of the sun, moon, or containing a uniform lighting, that, being mediated by the artist's interpretation of reality and his technical rendering, require an even higher level of interpretation compared with photographs. We also study how chromatic content and spatial layout affect the results by presenting grey-scale and inverted versions of each painting. Finally, we assess directly with a categorization test how subjective image interpretation affects pupil response. We find that paintings with the sun elicit a smaller pupil size than paintings with the moon, or paintings containing no visible light source. The effect produced by sun paintings is reduced by disrupting contextual information, such as by removing color or manipulating the relations between paintings features that make more difficult to identify the source of light. Finally, and more importantly, pupil diameter changes according to observers’ interpretation of the scene represented in the same stimulus. In conclusion, results show that the subcortical pupillary response to light is modulated by subjective interpretation of luminous objects, suggesting the involvement of cortical systems in charge of cognitive processes, such as attention, object recognition, familiarity, memory, and imagination.
Introduction
The pupil is the central opening of the iris that regulates the intensity of light entering the eye to adjust retinal illumination and optimize vision (Loewenfeld, 1993). Light increments produce pupillary constriction (miosis), whereas light decrements produce pupillary dilation (mydriasis). This is known as pupillary light reflex (PLR), which is controlled by the autonomic nervous system (Gamlin & Clarke, 1995;Loewenfeld, 1993). Currently, a consistent body of evidence demonstrates that the PLR is not merely a basic low-level mechanism, showing that, even if the intensity of light is the primary determinant of the pupil size, non-visual factors can also affect the pupil diameter.
Particularly relevant to the present study are findings showing that the pupil does not constrict only in response to the physical luminance of a stimulus, but also in response to its perceived luminance. For example, Lang and Endestad (2012) found that optical illusions that induce a subjective impression of brightness (Kitaoka lightness illusion) elicit pupillary constriction, compared with control stimuli (Kanizsa form illusion), despite the actual luminance was controlled. Later, Laeng and Sulutvedt (2014) continued the research toward an increasingly abstract level of stimuli, showing that mentally visualizing a bright scene, compared with a darker scene, produces pupillary constriction. Recently, Suzuki et al. (2019) found that colorful glare illusions (especially blue), that subjectively enhance the perception of brightness, induce pupillary constriction, reflecting an adaptive response of the visual system to a probable dangerous situation of dazzling sunlight. Furthermore, Binda et al. (2013b) found that pictures of the sun induce pupillary constriction compared with control stimuli of matched luminance, as photographs of the moon, showing that high-level interpretations of image content can modulate the pupil response. Naber and Nakayama (2013) also investigated the pupillary responses to a variety of natural scenes with the same low-level features, demonstrating a larger amplitude of pupil constriction to scenes containing a sun. By showing inverted images, they also investigated the effect of contextual information on the pupil, demonstrating how visual complexity affects pupil size. Taken together, these findings confirm that pupillary responses to ambient light reflect the interpretation of the light in the scene and not simply the amount of physical light energy entering the eye.
All of these studies indicate that the pupil diameter is sensitive to top-down modulation, and consequently that the pupil diameter could be modulated by cortical pathways other than the subcortical PLR system (Becket Ebitz & Moore, 2019;Binda & Murray, 2015a). A recent experiment (Sperandio et al., 2018) demonstrated that these extra-retinal modulations require visual awareness to modulate the pupil size. Using the continuous flash suppression (CFS) technique, they found that when participants were aware of sun pictures their pupils constricted relative to the control stimuli. This did not happen when the pictures were successfully suppressed from awareness, demonstrating that pupil size is sensitive to the contents of consciousness.
In the present study, we measured the pupil response to artistic paintings representing scenes with a visible sun, a visible moon, or the presence of diffused light to address the effect of cognitive interpretation of very complex stimuli. In fact, paintings render a scene through the artist's mind, requiring an even higher level of interpretation compared with photographs or artificial stimuli (Altschul, Jensen, & Terrace, 2017;Tatler & Melcher, 2007). In addition to the effect of image content, we also investigated the effect of contextual information, such as color and global layout. We aim to confirm that the pupil size depends on complex features of the visual stimulus that are presumably processed in the cortical areas.
The present study comprises one main experiment and two control experiments to investigate the effects of painting categories, contextual information, and subjective interpretation (Figure 1).
Effects of paintings' categories
Three categories of paintings were used. Paintings of the sun were used to investigate whether pictorial representations of high-luminance objects may elicit a smaller pupil size than other subjects, independently of the luminance of the images. Paintings of the moon were used to investigate the pupillary response to stimuli that also represent a luminous disc but are cognitively associated with a dark scene. Paintings with diffused light or different light sources (e.g., fires, volcanoes, etc.) were used to investigate whether the mere presence of light in the absence of a luminous disc has any effect on pupil diameter.
To ensure that the results have general meaning, for each category we have purposely chosen stimuli painted over a period of more than 300 years and pertaining to very different styles, and we think this represents a strong point of the study.
In the main experiment (experiment 1), all the stimuli were presented by making them appear over a background of higher luminance. If the response depended only on overall light level, the same pupillary dilation would be expected for all stimuli. (Figure 1 note: copyright permission from the author was obtained for the painting shown, Rural sunrise, Gercken, 2012.) On the other hand, presentation of images depicting luminous objects is expected to produce pupil constriction due
to high-level visual processing (Binda et al., 2013b; Sperandio et al., 2018), overriding the effect due to the physical properties of the stimulus. We expect to find a smaller pupil size for stimuli containing a light source, particularly the sun, due to the high-level interpretation of the paintings' content.
In a second control experiment (experiment 2), to rule out possible effects of luminance on the results of the main experiment, stimuli were presented by making them appear over a grey background of matching luminance. In this condition, there is no discrepancy between the luminance of the screen during fixation and the stimulus, therefore, any deviation from baseline pupil size would be due to stimulus content only. As in the main experiment, we expect a smaller pupil size for stimuli with luminous light sources.
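Generating the luminance-matched grey background of experiment 2 presumably requires computing each stimulus's mean luminance. A minimal sketch, assuming linear RGB values in [0, 1] and the Rec. 709 luma coefficients (the authors' actual display calibration may differ):

```python
def mean_luminance(pixels):
    """Mean relative luminance of linear-RGB pixels in [0, 1].

    `pixels` is an iterable of (r, g, b) triples; the weights are the
    Rec. 709 luma coefficients, which sum to 1.
    """
    w_r, w_g, w_b = 0.2126, 0.7152, 0.0722
    pixels = list(pixels)
    total = sum(w_r * r + w_g * g + w_b * b for r, g, b in pixels)
    return total / len(pixels)

def matched_grey_level(pixels):
    """Grey level (equal R, G, B) whose luminance matches the stimulus.

    Because the luma weights sum to 1, a uniform grey field at this
    level has exactly the stimulus's mean luminance.
    """
    return mean_luminance(pixels)
```

With such a matched background there is no luminance step at stimulus onset, so any pupil change can be attributed to stimulus content, which is the logic of the control condition described above.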
Because studies have shown that pupillary responses are more sensitive to luminance changes in the fovea (Clarke, Zhang, & Gamlin, 2003), a third control experiment (experiment 3) repeated the same paradigm as the main experiment, except that stimuli were presented in the periphery of the visual field. We expect to confirm the results of the main experiment, thus ruling out a possible dependence of the pupillary response on retinal eccentricity.
Contextual cues, such as the relative position of objects and their orientation, are undoubtedly important for fast image interpretation (Oliva & Torralba, 2006). Disrupting these cues can have an effect on the pupillary response to images, as already shown by Naber and Nakayama (2013) with computer renderings of natural images. These variables were investigated within experiments 1 and 2, by comparing pupil responses to original paintings (upright and full-color) with their inverted (180-degree rotated) and no-color (grey-scale) versions.
Effects of subjective interpretation
It is well known that aesthetic experience is unique to each individual (Kuchinke, Trapp, Jacobs, & Leder, 2009;Marković, 2010;Marković, 2011;Marković, 2012;Marković & Radonjić, 2008). It has also been shown that individual mental imagery (Laeng & Sulutvedt, 2014;Mathot et al., 2017) and the content of consciousness (Sperandio et al., 2018) affect pupillary reactions. This means that the content represented in our paintings may be differently interpreted by each participant and, as a consequence, affect pupil diameter. For these reasons, we tested whether the paintings chosen as our stimuli elicited different pupil responses in experiment 1 depending on how the observer interpreted the scene, based on their response to a categorization test.
Participants
Twenty-eight observers (18 women and 10 men, mean age = 27.2, SD = 5) participated in experiment 1, another 12 observers (5 women and 7 men, mean age = 26.4, SD = 4) participated in experiment 2, and another 12 observers (6 women and 6 men, mean age = 26.5, SD = 4) participated in experiment 3. Before starting the experiments, all participants filled out a questionnaire about personal data, presence of aberration or optical defects, history of brain damage, medication intake, tobacco consumption, and caffeine intake. All selected participants had a normal or corrected-to-normal vision (by contact lenses) and did not take any type of medication. Participants were asked to abstain from drinking coffee before the experiment and not to wear eye make-up. Observers were unaware of the aim of the experiment and gave written informed consent before the experiment. All experimental procedures were approved by the local ethics committee (Comitato Etico Pediatrico Regionale -Azienda Ospedaliero-Universitaria Meyer -Firenze FI) and were compliant with the Declaration of Helsinki.
Apparatus and set-up
Each participant was tested individually in a dark room, with no lighting other than the display screen. Stimuli were presented on an ASUS monitor (51 × 29 cm, resolution 1920 × 1080 pixels), through a dedicated computer (iMac Retina 5K, 27-inch, mid 2015 3.3 GHz Intel Core i5 processor, MacOs Sierra software version 10.12.6). The observer was positioned at 57 cm distance from the monitor with a chin rest used to stabilize the head. Pupil diameter was binocularly tracked at 60 Hz with a CRS LiveTrack FM system (Cambridge Research Systems). Stimulus presentation and data collection programs were developed using Matlab (R2016b version).
Stimuli
We selected 30 paintings of natural scenes, produced in different historical periods and with different styles (impressionism, realism, etc.). Each stimulus was nominally assigned to one of the three categories of our study, based on circumstantial elements, such as painting's title or the authors' interpretation ( Table 1; for examples of each category see Figure 2A). All images were resized (conserving proportions) to either a width or a height of 283 pixels, with the other side ranging from 178 to 355 pixels. The original luminances of all paintings, in all their versions, were modified and were rescaled to the same value, corresponding to the average luminance of the whole set (9.7 cd/m 2 ). They were also rescaled to a common resolution (28.35 pixels/cm). The luminance varied within each image, reaching its maximum at the point where the source of illumination was represented. We measured the value of luminance at the center of each lunar/solar disc represented in our images, and tested for differences between sun and moon distributions, finding no statistically significant effect (sun: M = 40.2 cd/m 2 , SD = 13.7 cd/m 2 ; moon: M = 37.5 cd/m 2 , SD = 17.3 cd/m 2 ; t(1) = 0.38, p > 0.05; Figure 3).
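The mean-luminance normalization described above could be sketched as follows, under the simplifying assumption that pixel values map linearly to screen luminance (display gamma calibration is not addressed here); the function name, image size, and values are illustrative, not the study's materials.

```python
import numpy as np

TARGET_MEAN = 9.7  # average luminance of the whole stimulus set (cd/m^2)

def rescale_mean_luminance(lum_map, target=TARGET_MEAN):
    """Multiplicatively rescale a per-pixel luminance map so that its
    mean equals the target, preserving relative contrast within the image."""
    lum_map = np.asarray(lum_map, dtype=float)
    return lum_map * (target / lum_map.mean())

# Synthetic "painting": a bright disc (the depicted light source) on a
# darker ground, roughly the stimulus size reported in the Methods
img = np.full((178, 283), 5.0)
img[80:100, 130:150] = 60.0
scaled = rescale_mean_luminance(img)
```

A multiplicative rescale keeps the within-image luminance profile intact, so the local maximum at the painted light source survives the normalization, as described in the text.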
In addition to the 30 paintings, a set of 10 uniformgrey rectangular images were generated, matching the mean luminance (9.7 cd/m 2 ) and the average size of paintings, to be used as control stimuli for luminance.
Furthermore, a grey-scale and an inverted (180 degree rotated) version were produced for each painting (see Figure 4A). They were used in experiments 1 and 2, to assess the role of color and global image organization.
Procedure
The eye tracker was calibrated at the beginning of each session with a standard 9-point calibration routine. In experiment 1, trials started with the presentation of a black fixation cross (5 × 5 mm) in the center of a white screen (71 cd/m 2 ) for 2.5 seconds (pre-stimulus interval). This was followed by the presentation of one of the stimuli for 2 seconds (stimulus interval). The fixation cross was kept visible in the center of the screen during the pre-stimulus and stimulus intervals, whereas the luminance of the background screen was kept constant at 71 cd/m 2 . Observers were instructed to keep their gaze at the fixation cross for the whole of the pre-stimulus and stimulus intervals, refraining from blinking, and not to perform any other task. During this time, pupil size was continuously monitored by means of a camera attended by the experimenter on her own screen (using QuickTime software) throughout the whole experiment. Each trial was followed by an inter-trial interval of 2 seconds, in which a white screen (71 cd/m 2 ) was displayed. During this time, the eye tracker did not record, and the observers were allowed to blink and rest their eyes before the next trial ( Figure 1A). Experiment 1 consisted of 100 trials divided into 4 blocks of 25 images: 10 different paintings per category plus their inverted and grey-scale versions, plus 10 uniform-grey control stimuli. The sequence of stimuli presentation was randomly predetermined and kept the same for all observers.
Experiment 2 followed the same procedure as experiment 1, except that stimuli were presented on a grey background having the same luminance as the mean luminance of the stimuli (9.7 cd/m 2; Figure 1B). In this experiment, uniform grey control stimuli, having the same luminance as the background, were not used. This led to 90 trials in 2 blocks of 22 plus 2 blocks of 23 stimuli.
Figure 3 caption. (A) Error bars on the right are SE of the means μ (red: sun; blue: moon; green: diffused light; black: uniform grey control stimuli). Asterisks mark statistically significant pairwise comparisons across image categories: *p < 0.05; **p < 0.01; ***p < 0.001. All data shown have been corrected based on each observer's categorization. Error bars are SE i. Locations of misinterpretations in the distribution of each image: image 3, 82nd percentile; image 6, 96th percentile; image 8, between the 57th and 93rd percentiles; image 17, between the 11th and 39th percentiles; image 18, between the 11th and 46th percentiles; image 19, 21st percentile; image 23, between the 11th and 39th percentiles; image 25, 18th percentile; image 27, 39th percentile; image 28, 21st percentile; image 30, 32nd percentile. (B) Correlation between the local luminance at the center of the light source of each painting and the corresponding pupillary response averaged across observers μ i (experiment 1). There is no significant correlation between pupil dilation and local luminance at the center of suns (R 2 = 0.23, F(1) = 3.83, p > 0.05) or moons (R 2 = 0.06, F(1) = 0.45, p > 0.05). The dotted lines indicate the mean luminance at the center of sun (red; M = 40.2 cd/m 2, SD = 13.7 cd/m 2) and moon paintings (blue; M = 37.5 cd/m 2, SD = 17.3 cd/m 2). All data shown have been corrected based on each observer's categorization.
Experiment 3 also followed the same procedure of experiment 1, but stimuli were presented in an off-center location, 5 degrees to the right of the fixation cross ( Figure 1C). In this case, grey-scale and inverted versions of paintings were not tested, leading to 40 trials divided in 2 blocks of 20 stimuli.
After the experiments, all paintings were presented again in sequence to the observers without time limitation and pupil recording, asking them to categorize each, as "sun," "moon," or "other." The complete procedure took about 50 minutes per observer, of which about 30 minutes were of pupil recordings.
Data processing
Raw data recorded by the eye-tracker were processed in the same way for all three experiments. Right and left pupil diameters were averaged, and the resulting value was transformed from pixels to millimeters. Calibration was attained by measuring the instrument's recording of a 4 mm artificial pupil, positioned at the approximate location of the subjects' left eye.
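A minimal sketch of the binocular averaging and pixel-to-millimeter conversion described above; the 80-pixel reading for the artificial pupil and the function names are hypothetical, not values from the paper.

```python
import numpy as np

ARTIFICIAL_PUPIL_MM = 4.0  # physical diameter of the calibration target

def mm_per_pixel(recorded_px_of_artificial_pupil):
    """Scale factor from the tracker's pixel units to millimeters,
    derived from a 4 mm artificial pupil placed at the eye's position."""
    return ARTIFICIAL_PUPIL_MM / recorded_px_of_artificial_pupil

def to_mm(left_px, right_px, scale):
    """Average left and right pupil diameters sample by sample, then
    convert the result from pixels to millimeters."""
    return (np.asarray(left_px) + np.asarray(right_px)) / 2 * scale

scale = mm_per_pixel(80.0)                  # hypothetical: tracker reads 80 px
diam_mm = to_mm([78.0, 82.0], [80.0, 84.0], scale)
```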
For each observer, a baseline pupil diameter was calculated by averaging pupil diameter recorded over the last 500 ms of the pre-stimulus interval in each trial. This baseline value was then subtracted from each recording of that observer over the whole 4.5 second period (Mathôt, Fabius, Van Heusden, & Van der Stigchel, 2018).
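The baseline-correction step above can be sketched as follows; the sampling rate and interval durations come from the Methods, while the array layout and function name are illustrative assumptions.

```python
import numpy as np

FS = 60          # eye-tracker sampling rate (Hz)
PRE = 2.5        # pre-stimulus interval (s)
BASELINE = 0.5   # last 500 ms of the pre-stimulus interval

def baseline_correct(trial_mm):
    """Subtract the mean pupil diameter over the last 500 ms of the
    pre-stimulus interval from the whole 4.5 s trace (values in mm)."""
    trial_mm = np.asarray(trial_mm, dtype=float)
    start = int((PRE - BASELINE) * FS)   # sample 120
    stop = int(PRE * FS)                 # sample 150
    baseline = np.nanmean(trial_mm[start:stop])
    return trial_mm - baseline

# Synthetic trial: 4.5 s at 60 Hz, flat 4.0 mm baseline then a slow dilation
trial = np.concatenate([np.full(150, 4.0), np.linspace(4.0, 4.6, 120)])
corrected = baseline_correct(trial)
```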
All results were classified according to the categorization made by the observer in the test, to ensure that the pupil size corresponded to the subjective interpretation of the nature of light source. For example, if a painting with a moonlit scene had been categorized as "sun" by some participants, the recordings obtained with this image were analyzed as a sun stimulus for this observer.
The analysis of the pupil responses elicited by different categories of paintings, or by different versions of the same painting, follows a method widely used in the literature for this type of experiment (Binda et al., 2013b; Naber & Nakayama, 2013). An average pupil size μ was calculated for each image category as follows. First, all recordings p_s,i(t) from each observer s, where i is the stimulus index, were averaged as a function of time, p̄_s(t) = (1/I) Σ_i p_s,i(t), with I = 10 images per category. Then, temporal averages μ_s were computed over the duration of the stimulus interval for each observer, and these were averaged across observers to obtain the category mean μ with its standard error.
In addition, data were also analyzed on an image-by-image basis as follows. For each image i, the response of each participant s as a function of time, p_s,i(t), was averaged over the stimulus interval; the mean μ_i and standard error SE_i across observers were then computed for each image.
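Under the assumption that baseline-corrected trials are stored as an observers × images × samples array (a layout of our choosing, not stated in the paper), the two-stage averaging could look like this, with synthetic data for illustration:

```python
import numpy as np

# pupil[s, i, t]: baseline-corrected trace of observer s for stimulus i
# within one category (I = 10 images), sampled at 60 Hz over the 2 s
# stimulus interval
rng = np.random.default_rng(0)
S, I, T = 28, 10, 120
pupil = rng.normal(0.4, 0.05, size=(S, I, T))

# First average across the I images of the category for each observer...
p_s = pupil.mean(axis=1)             # shape (S, T)
# ...then average over the stimulus interval to get one value per observer
mu_s = p_s.mean(axis=1)              # shape (S,)
# Category mean and its standard error across observers
mu = mu_s.mean()
se = mu_s.std(ddof=1) / np.sqrt(S)

# Image-by-image analysis: average each image's traces over time and
# across observers, with a per-image standard error
mu_i = pupil.mean(axis=(0, 2))       # shape (I,)
se_i = pupil.mean(axis=2).std(axis=0, ddof=1) / np.sqrt(S)
```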
Effects of paintings' categories
The main result of this work comes from the comparison of responses to the presentation of the three categories of paintings and to the uniform grey control stimuli. The time course of pupil size for each painting category p̄(t) obtained from experiment 1 is shown in Figure 2B (left). Because all images equally and greatly reduce the luminance level across the screen, if the response were based only on luminance, we would expect the same pupillary dilation for all categories. In fact, the line graph in Figure 2B (left) shows that sun stimuli elicited a much smaller dilation than all other categories, despite having the same mean luminance. Paintings with the moon, paintings with diffused light, and uniform grey control stimuli induced a consistent pupillary dilation.
Significant differences between all categories of stimuli μ are evidenced by ANOVA (F(3) = 20.54, p < 0.001). Pairwise comparisons (Table 2) show that paintings with the sun produced lower dilation than paintings with the moon, with diffused light and uniform luminance images. In addition, moon paintings produce smaller dilation than uniform grey control stimuli. No statistical difference is found between the dilation induced by diffused light paintings and moon or uniform grey control stimuli (see Figure 2B, right).
The size of differences between conditions, estimated by Cohen's d statistics, is very small for sun versus moon paintings (s = 0.55, d = 0.12), small for sun versus diffused light paintings (s = 0.55, d = 0.18) and sun versus uniform grey (s = 0.53, d = 0.22), and very small for moon versus uniform grey (s = 0.53, d = 0.09). Values lower than 0.01 were considered to be negligible effects (Cohen, 1988; Sawilowsky, 2009).
The time courses p̄(t) also show the same general trend for all categories (see Figure 2B, left). Pupil diameter increases gradually during the pre-stimulus interval, then remains stable for about 500 ms after stimulus onset, at a common level for all categories. After this, pupil size starts to increase with different slopes according to stimulus category. The associated uncertainty SE(t) also increases with time for painting stimuli, while staying approximately constant for control stimuli (see the Discussion section for possible explanations). This highlights the advantage of the second, image-by-image method of analysis, whereby different data points are combined with proper accounting for their differing uncertainties.
Because eye movements can influence pupil changes (Gagl, Hawelka, & Hutzler, 2011), although observers were instructed to keep fixation and their eye movements were monitored, we analyzed a posteriori the average position of their eyes with respect to the fixation cross for the different stimulus categories. The average distance from fixation in millimeters was minimal (sun: 2.45 ± 0.5; moon: 2.88 ± 0.6; diffused light: 2.09 ± 0.4; and mean luminance: 2.59 ± 0.5) and the same for all categories, including the uniform grey stimuli (ANOVA, F(3) = 0.38, p > 0.05).
Results of experiment 2 are displayed in Figure 2C. In this case, the same pupillary constriction is expected for all kinds of paintings, but we found that the constriction induced by paintings of the sun is larger than those elicited by paintings of the moon and paintings with diffused light (ANOVA: F(2) = 11.88, p < 0.001; see Table 2). The size of this effect is categorized as small for sun versus moon paintings (s = 0.71, d = 0.2) and sun versus diffused light (s = 0.69, d = 0.3).
In experiment 3, where paintings are displayed in the periphery of the visual field, the time course of responses (Figure 2D, left) suggests a lower dilation for paintings of the sun than for other categories. This is confirmed by the ANOVA analysis (F(3) = 9.86, p < 0.001; see Table 2; Figure 2D, right). The size of this effect is very small for sun versus moon paintings (s = 0.51, d = 0.1) and sun versus grey uniform (s = 0.49, d = 0.1), and small for sun versus diffused light (s = 0.54, d = 0.2).
Image by image analysis
Paintings are less uniform stimuli than photographs in representing a given subject. To assess the variance of the responses elicited by different paintings, data of experiment 1 have been analyzed image by image, and results are shown in Figure 3A.
The first finding is that the large majority of images were classified by observers in agreement with the nominal classification provided by the authors, but there were a small number of exceptions. They occur in 11 paintings, for a total of 20 observations, amounting to 2% of total occurrences. They are an interesting effect that we investigate further in the section Effects of subjective interpretation below, but their limited number has a small effect on the overall results, as we verified by repeating the analysis based on the nominal rather than the observers' classification.
For the cases where the paintings were perceived according to their nominal categorization, the variances in pupil responses (σ 2 SU N = 0.002, σ 2 MOON = 0.003, σ 2 DIF FU SED = 0.002, σ 2 GREY = 0.001) are compatible among all stimulus categories (Fisher's tests, p > 0.1 for all comparisons). More importantly, they are also statistically compatible with the variance of the responses observed to uniform grey control stimuli (Fisher's tests, p > 0.1 for all comparisons). This indicates that the obvious differences between individual paintings do not dominate the observed spread in response.
For the cases where the paintings were not perceived according to their nominal categorization, pupil responses always shifted in the direction of the mean of the perceived category: when sun paintings were perceived as "other," pupil sizes were larger; when moon and diffused light paintings were perceived as "sun," pupil sizes were smaller. However, these values, although apparently off-scale, all fell between the 11th and the 93rd percentiles of the image distributions (for all values, see the caption of Figure 3A).
All stimuli had the same mean luminance, but they depict light sources of different size and intensity. To control for dependence on these variables, measurements in experiment 1 were correlated with the luminance value in the center of the light source. Figure 3B shows no significant correlation between pupil dilation and local luminance at the center of suns (R 2 = 0.23, F (1) = 3.83, p > 0.05) or moons (R 2 = 0.06, F (1) = 0.45, p > 0.05). In addition, no statistical difference is seen between average local luminance values at the centers of the sun and moon light sources (t(1) = 0.38, p > 0.05).
Effects of contextual information
Another interesting result of experiment 1 follows from the comparison between the pupillary responses elicited by paintings of the sun in their original, grey-scale, and inverted versions (examples in Figure 4A). Figure 4B (left) shows the time course of pupil size p̄(t) for sun paintings and their grey-scale and inverted versions. Average pupil responses μ are found to be different between these three conditions (ANOVA: F(2) = 28.09, p < 0.001). Grey-scale and inverted versions produce a significantly wider pupillary dilation than the original version of the sun paintings. This suggests that manipulations of image structure or color may alter the interpretation of scene brightness and, as a consequence, modulate the pupil response itself. In addition, grey-scale versions produce a larger dilation than inverted versions of the paintings. This indicates that the global arrangement of painted elements is less important than their color in suggesting the presence of light in a painting (Table 3; Figure 4B, right). Although the same observer sees each painting only once in the original, once in the inverted, and once in the grey-scale version, which differ in contextual information, there might still be a habituation effect on pupil size, as described by Yoshimoto, Imai, Kashino, and Takeuchi (2014). A two-way ANOVA ruled out this possibility, showing a significant main effect of sun-painting version (ANOVA: F(2) = 28, p < 0.001) but no significant effect of presentation order (F(2) = 1.28, p > 0.05).
The same pattern of results is obtained with the same stimuli in experiment 2 (see Figure 4C, left). Original versions of sun paintings elicit more constriction than their inverted versions, which in turn elicit more constriction than grey-scale versions (ANOVA: F(2) = 33.14, p < 0.001; see Table 3; Figure 4C, right). The size of these differences, assessed by Cohen's d, is small for original versus inverted versions (s = 0.70, d = 0.2) and original versus grey-scale (s = 0.70, d = 0.3), and very small for inverted versus grey-scale versions (s = 0.70, d = 0.11). ANOVA also shows statistical differences for different versions of moon (ANOVA: F(2) = 5.96, p < 0.01) and diffused light paintings (ANOVA: F(2) = 15.48, p < 0.001). Indeed, grey-scale versions of moon paintings produce less constriction than their original versions (t(2) = 2.96, p < 0.05), and grey-scale versions of diffused light paintings produce less constriction than their original (t(2) = 5.11, p < 0.001) and inverted versions (t(2) = 4.57, p < 0.01). Therefore, in this condition, for all stimulus categories, the disruption of contextual cues alters the pupillary response.
Effects of subjective interpretation
Paintings are intrinsically complex stimuli, requiring a greater interpretative effort than photographs and real-life scenes, which leads to cases of ambiguous interpretation by observers. This is the reason for performing our main analysis based on individual observers' responses to the categorization test (see Procedure). It is, however, interesting to look in more detail at the cases of ambiguous response. Figure 3A shows, image by image, not only the average response of conforming observations, but also the individual responses observed in the few cases of nonconforming categorizations. Inspection of Figure 3A clearly suggests that when observers classified a nominal sun painting as "other" (therefore seeing no light source), their pupils were larger than those of observers who had classified the same image as a sun, whereas moon and diffused light paintings elicited a smaller pupil size in observers who had classified them as "sun" stimuli.
To test for the presence of the effect of subjective image interpretation, a nonparametric, one-tailed, Mann-Whitney ranking test was performed for data of all paintings that elicited differing responses in our experiment (in cases where only one misinterpretation occurred, the p value was directly determined as the ratio of the rank of the outlier and total number of subjects). Results show a significant effect for each case tested (p < 0.05). To assess the overall significance for the presence of an effect, individual p values were combined according to the Fisher's method (Mosteller & Fisher, 1948) yielding an overall p value < 0.0001. This is a strong indication for an influence of cognitive interpretation of a visual scene on the pupillary response of the observer. Figure 5 shows, as an example, μ i for the three most ambiguous stimuli of our set, each receiving 3 of 4 misclassifications in experiment 1, reported according to the categorization received ("sun," "moon," or "other").
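A sketch of this two-step test using standard SciPy routines; the per-image responses below are made-up illustrative numbers, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical baseline-corrected pupil responses (mm) per painting
conforming = {                      # observers who saw the nominal category
    "image_17": [0.41, 0.38, 0.45, 0.36, 0.40],
    "image_25": [0.52, 0.48, 0.55, 0.50],
}
misclassified = {                   # observers who instead saw a "sun"
    "image_17": [0.12, 0.15],
    "image_25": [0.20],
}

p_values = []
for img in conforming:
    mis, con = misclassified[img], conforming[img]
    if len(mis) > 1:
        # One-tailed Mann-Whitney: misinterpreted-as-sun responses smaller
        _, p = stats.mannwhitneyu(mis, con, alternative="less")
    else:
        # Single misinterpretation: p = rank of the outlier / n subjects,
        # as described in the text
        pooled = sorted(mis + con)
        p = (pooled.index(mis[0]) + 1) / len(pooled)
    p_values.append(p)

# Fisher's method: -2 * sum(ln p_i) follows a chi-square with 2k dof
chi2_stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
```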
Discussion
We show that artistic paintings depicting scenes illuminated by light sources of different nature, such as the sun or the moon, or containing diffused lighting, can differentially modulate the pupillary response according to the scene represented, and not to their specific luminance or other low-level visual features, even though they are much less realistic than photographs in representing natural scenes and are largely mediated by the artist's interpretation of reality and technique.
In fact, although all paintings had the same mean luminance, when presented on a lighter background, paintings containing a light source produced less dilation than meaningless uniform grey rectangles of the same mean luminance, which served as the control for dilation in this condition. In particular, paintings with the sun elicited a much smaller dilation than paintings with the moon, which in turn produced a lower dilation than paintings containing no visible light source.
This pattern of results does not depend on background luminance. When paintings are presented on a mean grey background, they all produce constriction, although this is not expected from their average luminance, which is equivalent to the background. This agrees with previous observations that the onset of changes in contrast, besides luminance, elicits pupillary constriction (Naber et al., 2011; Naber & Nakayama, 2013). We find that the constriction induced by paintings containing a visible sun is larger than that produced by moon and diffused light paintings. It is well known that the strength of the pupillary response is larger for luminance changes occurring in the fovea (Clarke, Zhang, & Gamlin, 2003), and this raises the question of the role played by the higher luminance values found near the fixation center in the case of sun and moon paintings. Three independent observations demonstrate that the spatial distribution of luminance in the visual field and between image categories is not responsible for the observed differences between categories. First, when paintings are presented in the periphery, the same pattern of results is obtained: sun paintings produce less dilation than moon, diffused light, and grey uniform control stimuli. This is in agreement with previous findings on photographic images (Binda et al., 2013b). Second, no correlation was found between pupil dilation and the local luminance measured at the center of suns or moons. Finally, the average luminance values at the centers of sun and moon disks are compatible.
All the effects found for different stimulus categories do not depend on eye movements, which have been shown to modulate the pupil response (Gagl et al., 2011).
Our findings are in general agreement with those reported in the literature with non-painting stimuli (Binda et al, 2013b;Naber & Nakayama, 2013), but sun paintings produce a weaker effect compared with realistic pictures (Binda et al., 2013b). This might be the result of several factors, like differences in stimulus size and relative difference between luminance of stimuli and background. Our stimuli are also much more complex and may require higher cognitive load (Altschul et al., 2017;Tatler & Melcher, 2007), which is known to cause pupil dilation (Beatty, 1982;Hess & Polt, 1964;Just & Carpenter, 1993).
Results do not depend on the specific paintings chosen for the experiments, assigned to the three categories by the experimenters and validated by all subjects in the categorization test. Although the photograph categories chosen in similar studies comprise more or less homogeneous sets (see Binda et al., 2013b), here paintings in the same category were deliberately chosen to be as different as possible in style and period, to ensure the general validity of the findings. Despite this diversity, the variability of responses to sun, moon, and diffused light paintings is the same, and, more importantly, it does not differ from the variability of responses to uniform grey control stimuli. This indicates that the pupil response is mainly driven by the scene depicted, overriding differences in period, the artist's personal style, or his/her technique for rendering light sources.
Interesting results also emerge from the analysis of the time variation of pupil size in experiments with a light background. During the pre-stimulus interval there is a gradual increase of pupil diameter, possibly due to the effect of expectations (Irons, Jeon, & Leber, 2017). During the first 500 ms after stimulus presentation, pupil diameter is mostly stable and equal for all categories. This could be because the constriction that usually occurs when a stimulus appears (Naber & Nakayama, 2013; Naber et al., 2011; Privitera, Renninger, Carney, Klein, & Aguilar, 2010) may be compensated by the dilation that should be produced by showing a stimulus darker than the background. After this 500 ms period, pupil response starts to differ between categories. For all of them, though, there is a progressive increase of pupil size until the end of the recording, consistent with the dilation effect due to cognitive load described in the literature (Hess & Polt, 1964; Just & Carpenter, 1993). Interestingly, the variability of observers' responses to all categories of paintings also increases with time, being larger for sun paintings, while remaining more or less constant for the response to the uniform grey control stimuli. Note that this same effect was also present in pupil responses to photographs (Binda et al., 2013b) and to words conveying a sense of brightness or darkness (Mathot et al., 2017), although not analyzed or commented on by the authors. We cannot be sure about the cause of this effect, but we can speculate that a number of different cognitive processes progressively set in while observers keep looking at the stimuli. These may include attention, recognition of elements in the painting, familiarity with the specific painting, aesthetic preference, memory, imagination, etc. All these factors, being different for each individual, produce a larger variability of responses than could be generated by lower-level perceptual visual mechanisms.
This hypothesis is also in agreement with the observation that uniform-grey images, not involving such high-level processes, do not exhibit the same increase in variability.
Inverted paintings of the sun produce a larger pupil size than originals, despite sharing the same low-level features, such as luminance, contrast, chromatic contrast, and Fourier transform. This shows again that pupil amplitude is largely modulated by the observer's interpretation of the luminous objects rather than by their low-level features (Binda et al., 2013b; Naber & Nakayama, 2013). Image inversion is known to impair recognition performance for stimuli such as pictures of faces, buildings, and cartoons (Naber & Nakayama, 2013; Scapinello & Yarmey, 1970; Strother et al., 2011; Valentine & Bruce, 1986; Van Belle, De Graef, Verfaillie, Rossion, & Lefevre, 2010; Yin, 1969). Therefore, by changing the complex relations between features of the paintings, inversion decreases the information about their content, making it more difficult for the observer to use contextual cues to identify the source of light. A similar effect was found by Naber and Nakayama (2013) in computer-generated images.
Grey-scale versions of sun paintings cause an even greater pupil size than originals, comparable to that produced by the meaningless uniform grey images used as controls. Because chromatic content is a very important cue for image interpretation (Goffaux et al., 2004; Greene & Oliva, 2009; Oliva & Schyns, 2000; Oliva & Torralba, 2006; Steeves et al., 2004), the fact that the absence of color in sun paintings increases pupil size is further proof that the pupillary response is largely driven by the interpretation of the light source. The suggestion that colored stimuli may produce a different pupil response than their grey-scale versions had indeed been previously advanced, although not systematically investigated (Snowden et al., 2016).
The grey-scale versions of our sun stimuli also cause a larger pupil size than inverted versions, suggesting that color cues are even more important than spatial organization for the identification of the light source.
Note that the presentation of each painting in three different versions does not affect pupil responses, as expected with multiple exposures to the same stimulus (Yoshimoto et al., 2014), probably because the three versions are not perceived as repetitions of the same stimulus.
The chromatic structure of artistic compositions mostly follows the statistical features of the natural environment (Montagner et al., 2016). Therefore, blue colors are generally used in night scenes representations, whereas yellow-reddish chromaticities are used in rendering daylight scenes. Thus, different response to moon and sun paintings might be ascribed to their different chromatic contents. However, the results of this work imply that the presence of an object interpretable as a light source plays a crucial role in scene reconstruction. Indeed, diffused-light paintings endowed with the same yellow-reddish chromaticities of sun paintings, but no visible light source, produce distinguishably larger pupil size.
Perhaps the most convincing evidence presented in this work for the crucial role of image interpretation in pupillary response, is the strong relationship observed between pupil diameter of observers and their subjective interpretation of the light source. The same painting is capable of eliciting constriction in observers who see it as a sun representation and dilation in those who see it as a moon.
Other authors have tried to explain why images containing a sun produce more constriction than images of the same luminance with a different lighting structure, and we can reasonably presume that these explanations may also hold for the effects found with our paintings. One potential explanation is that the subjective perception of increased brightness reduces pupil size, as Laeng and Endestad (2012) and Suzuki et al. (2019) found with illusions using psychophysical methods. Nevertheless, Binda et al. (2013b), using a rating method for their stimuli, did not find this correlation. Moreover, Naber and Nakayama (2013) demonstrated that even cartoon depictions of the sun, appearing no brighter than cartoon depictions of the moon, can result in pupil constriction. Another proposed explanation is based on a different spatial distribution of attention across image categories, as attention is known to strongly affect pupil size (Binda et al., 2013a). The observer's attention might focus more on the brighter regions of the sun pictures and spread more evenly in other images. However, this hypothesis was ruled out by Binda et al. (2013b), who showed that photographs of the sun cause constriction even when the observer's attention is directed to performing a different task. An explanation that remains open after the present work is that of a protective behavior against a potentially harmful light level, triggered by high-level interpretation of a very luminous object (Binda et al., 2013b; Laeng & Endestad, 2012; Naber & Nakayama, 2013; Suzuki et al., 2019). In other words, we can hypothesize that our visual system initiates a defense response to the powerful light implied by the sun, even when it is merely depicted in a painting.
All the evidence presented in this work converges with the results of previous studies in suggesting a top-down control on the pupillary light reflex (Becket Ebitz & Moore, 2019; Binda & Murray, 2015a). The neural pathways underlying this high-level modulation of the PLR cannot be identified with certainty, but some potentially relevant circuits have already been identified. It is well established that pupillary constriction results from the activation of the subcortical Edinger-Westphal nucleus (EW) (Gamlin & Clarke, 1995), and there are some known modulatory inputs from cortical areas to this circuit. First, EW activity is enhanced by inputs from the visual cortex (Becket Ebitz & Moore, 2017; Binda & Gamlin, 2017) and the superior colliculus (Gamlin, 2006; Joshi & Gold, 2019; Joshi, Li, Kalwani, & Gold, 2016; Wang & Munoz, 2015; Wang & Munoz, 2012). Other possible inputs to the PLR could come directly from the prefrontal cortex, in particular from the frontal eye field (FEF), or indirectly through the extrastriate cortex, the oculomotor regions in the parietal cortex, and the superior colliculus, all of which are modulated by the FEF (Becket Ebitz & Moore, 2017). The EW nucleus also receives inhibitory input from the sympathetic system through projections from the locus coeruleus (Joshi et al., 2016; Peinkhofer et al., 2019) and the hypothalamus that are potentially under cortical control (Aston-Jones & Cohen, 2005). A reduction of these inhibitory inputs
Baricitinib for the treatment of rheumatoid arthritis
Rheumatoid arthritis (RA) is a common inflammatory disease with several implications for health, disability and the economy. Conventional treatment for RA centers on anti-inflammatory drugs and specific targeting of tumor necrosis factor α (TNF-α) and interleukin 6 (IL-6). Baricitinib is a novel, Food and Drug Administration (FDA) approved, once-daily oral drug that is effective in combination with current treatment and results in significantly reduced symptoms with a good safety profile. Further studies are required to identify rare side effects and evaluate the long-term efficacy in disease modulation and patient symptom reduction. This is a comprehensive review of the literature on baricitinib for the treatment of RA. This review provides an update on the pathophysiology, diagnosis and conventional treatment of RA, then proceeds to introduce baricitinib and the data that exist to support or refute its use in RA. The review also covers the clinical trials confirming the effectiveness of baricitinib in this indication.
Introduction
Rheumatoid arthritis (RA) is the most common autoimmune inflammatory arthritis, affecting roughly 0.5-1.0% of the population [1][2][3][4][5][6]. Rheumatoid arthritis is a chronic disease that can attack the joints of the hands, wrists, elbows, shoulders, knees, and ankles [1]. This joint damage is progressive and irreversible [1]. The disease may cause general symptoms such as fatigue, fever, and weakness; the inflammation can also affect other organs, particularly the cardiovascular system, increasing the risk of myocardial infarction [1,7,8]. Rheumatoid arthritis reduces functional capacity, productivity, and patients' quality of life [5,9]. It occurs roughly 2.5 times as frequently in women as in men, hinting that hormones, environmental factors, and genetics may play a role in its development; it can affect any age group [1,6,10,[11][12][13][14][15][16].
While many aspects of RA pathophysiology are still unknown, modern scientific advances suggest that the primary cause of RA is dysregulation of the JAK/STAT pathway. Janus kinases (JAKs) are tyrosine kinases that alter transcription and translation processes by delivering signals from the extracellular environment to the nucleus. These kinases promote gene expression by phosphorylating appropriate sequences in the nucleus, which further causes transcriptional activation triggered by signal transducers and activators of transcription (STAT) [17].
Reumatologia 2020; 58/6
Abnormal activity of interleukin 6 (IL-6) is thought to be one of the primary causes of RA, since it plays an important role in directing T-cells to the tissues, thereby causing inflammation. Inhibition of IL-6 and of tumor necrosis factor α (TNF-α) has been shown to be one of the key mechanisms dictating the design of modern RA drugs. Tumor necrosis factor significantly affects bone metabolism and the renewal of bony tissue by altering osteoclast activity [18]. Further research has postulated other genetic mechanisms of RA development [19][20][21][22][23][24][25][26][27].
Baricitinib
Recently, two innovative companies introduced baricitinib to the pharmaceutical market. The companies' websites briefly describe the mechanism of action, stating that it functions as a JAK inhibitor (JAKi) and inhibits responses to cellular messenger proteins called cytokines, which are thought to be the key triggers of the inflammation and swelling associated with RA [28].
According to the manufacturers' websites, baricitinib is a tablet taken once daily by adults to treat moderate to severe active RA. Due to a number of potentially serious side effects, it is typically prescribed as a second-choice drug for patients who have tried TNF inhibitors but did not feel significant relief or could not tolerate the medicine due to side effects [28]. Both brands carry warnings about potentially serious side effects. Both also warn against using the medication in the case of a known serious infection, such as tuberculosis (TB), and therefore advise performing TB screening before starting the course of treatment. Other side effects may affect renal, cardiovascular, and eye health [20,28].
Food and Drug Administration approval
Baricitinib received Food and Drug Administration (FDA) approval in 2018 [29]. It has been approved in Europe and Japan in 2 mg and 4 mg doses [21]. It is indicated only if previous therapies with methotrexate or TNF inhibitors have failed. Serious side effects related to immunosuppression are concerning; thus, the FDA approved the 2 mg dose but not the 4 mg dose, the latter possibly being more effective but causing more pronounced side effects [22]. Researchers also warn that baricitinib may lead to cancer, thrombosis, or hyperlipidemia in certain patient groups [23,30].
Mechanism of action
Baricitinib acts upon Janus kinases, inhibiting them and preventing them from activating factors that lead to gene expression [17,31]. These kinases normally trigger the STAT pathway, thus starting the cascade of transcription initiation of effector genes [31]. This process, in turn, triggers the autoimmune and inflammatory reactions associated with the main symptoms of RA [17,31]. A complete and accurate mechanism for the last step is still being studied and clarified.
It was shown that baricitinib primarily targets and inhibits JAK1 and JAK2, with less efficacy in inhibiting TYK2 and JAK3 [31]. Typically, patients with RA have elevated circulating B-lymphocytes, T-lymphocytes, and macrophages, as well as elevated levels of the common arrays of immunoglobulins.
It was also shown to inhibit IFN-γ and IL-6 through a series of pathways. The drug is actively filtered in the glomeruli of the kidneys and subsequently secreted. Interestingly, CYP3A4, an enzyme involved in sequestering and eliminating toxins and drugs from the body, metabolizes only a very small fraction (less than 10%) of baricitinib in the body [17].
A recent study suggests that granulocyte-macrophage colony-stimulating factor (GM-CSF) mediated cellular signals are inhibited by JAKi, which, in turn, provides significant relief of RA symptoms.
The article also gives strong evidence that IL-6 is one of the major contributors to the RA-associated inflammatory response, followed by joint damage and other RA-related complications. The study further states that IL-10, IFN-α and IFN-γ play a crucial role in RA pathophysiology. However, inhibition of IL-10 may not be recommended, given that the overall action of IL-10 may in fact reduce inflammation [31].
Mitchell et al. [32] provide evidence that JAKi hamper the rate of chemotaxis and diapedesis of neutrophils towards IL-8, one of the important inflammatory markers in RA, thus alleviating the symptoms. However, these drugs have not been shown to increase the rate of apoptosis in neutrophil colonies taken from patients with RA, which causes them to remain active for a longer period of time. In addition, part of the inflammatory cascade generated by neutrophils also involves production of reactive oxygen species (ROS), which could not be suppressed by JAKi either.
Thus, many RA drugs, including baricitinib, are unable to fully quench the inflammatory response. It is thought that neutrophils taken from RA patients are already primed in vivo to cause inflammation and cannot be adjusted by medications [32]. Therefore, future studies can further elucidate the underlying mechanisms and possible solutions to reverse the increased production of ROS in RA cases.
Baricitinib intake was associated with lower amounts of neutrophils in the blood, leading to decreased inflammation. Several interleukin types were shown to promote proliferation of lymphocytes, which can also be controlled by the inhibition of JAK3. Baricitinib treatment also produced elevated levels of low-density lipoprotein cholesterol (LDL), high-density lipoprotein cholesterol (HDL), and creatinine [33,34]. Since JAK2 plays a role in controlling hematopoiesis, its inhibition by baricitinib resulted in decreased hemoglobin levels [33].
However, some papers note that no significant changes in leukocyte count or any other drastic abnormalities were noted following the course of administration of baricitinib [35,36]. This discrepancy is possibly due to different characteristics of patient populations and needs to be explored further.
Clinical trials
Since the FDA approval of baricitinib in May 2018, a number of randomized controlled trials have demonstrated the efficacy and safety profile of baricitinib in treating rheumatoid arthritis.
In 2014, two randomized controlled trials were conducted in healthy volunteers to demonstrate the pharmacokinetics and safety of baricitinib [37]. Multiple ascending doses between 1 mg and 20 mg were studied. The results showed dose-linear and time-invariant pharmacokinetics, with insignificant effects from a high-fat diet. The plasma concentration of baricitinib peaks 1.5 hours after oral ingestion, and mean renal clearance is 11.8 l/h. This study also demonstrated a dose-related decline in absolute neutrophil count.
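"Dose-linear, time-invariant pharmacokinetics" means that dose-normalized exposure (AUC/dose) stays roughly constant across the studied dose range. The sketch below illustrates that check; the AUC values are hypothetical, not data from the cited trial:

```python
# Illustrative dose-proportionality check: under dose-linear PK,
# AUC/dose is (approximately) constant across doses.

def is_dose_proportional(doses_mg, aucs, tol=0.10):
    """True if dose-normalized AUCs vary by less than `tol` (fractional)."""
    norm = [auc / dose for dose, auc in zip(doses_mg, aucs)]
    ref = norm[0]
    return all(abs(n - ref) / ref < tol for n in norm)

doses = [1, 2, 4, 8, 20]          # mg, range studied in healthy volunteers
aucs = [52 * d for d in doses]    # hypothetical ng*h/mL values, perfectly linear
print(is_dose_proportional(doses, aucs))
```

In practice regulatory assessments use formal power-model analyses rather than a fixed tolerance, but the underlying idea is this constancy of AUC/dose.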
Phase II clinical trials
In 2015, a phase IIb trial investigated the efficacy of baricitinib at 1 mg, 2 mg, 4 mg, or 8 mg vs. placebo [38]. The study involved 301 patients from 69 institutions in 9 countries who had failed prior treatment with methotrexate. The primary endpoint was the proportion of study volunteers in the 4 mg or 8 mg cohorts achieving an American College of Rheumatology 20% (ACR20) response at 12 weeks. The ACR20 is a tool that uses multiple measures to objectively evaluate improvement in rheumatoid arthritis symptoms. This trial showed a significant response to treatment in the combined 4 mg or 8 mg baricitinib cohort as compared to placebo (76% vs. 41%, respectively) at 12 weeks of therapy.
Additionally, baricitinib was well tolerated by most of the study participants, and all baricitinib groups showed an increased response compared to placebo in the other secondary endpoints, including ACR50, ACR70, and remission.
At 12 weeks the baricitinib and placebo groups experienced similar proportions of treatment-emergent adverse events. The mean neutrophil count declined in all baricitinib cohorts, but there was no significant decline in mean lymphocyte count as compared to the placebo. Mean low-density lipoproteins and mean high-density lipoproteins also increased in all the baricitinib cohorts in comparison to placebo. This study shows the effectiveness of baricitinib treatment in patients with active rheumatoid arthritis who were unresponsive to methotrexate therapy.
The inhibition of IL-6 signaling by baricitinib may in part explain how the drug affects lipoprotein metabolism and particle distribution, and was explored in a separate analysis of this trial in 2017 [39]. Changes in lipoprotein particle size and particle number, as well as changes in lipid profile, were assessed at weeks 12 and 24 in association with clinical efficacy.
Interestingly, the study found that following 2 weeks of baricitinib, patients experienced dose-related increases in serum lipid levels. Increases in HDL cholesterol, LDL cholesterol, and triglyceride levels were observed and remained elevated through 24 weeks. The observed increase in LDL cholesterol was associated with a shift in LDL particle size to large LDL particles [39].
Furthermore, treatment with baricitinib resulted in an increase in apolipoprotein A-I and a reduction in the serum amyloid A (SAA) content of HDL particles, which rendered HDL particles more efficient for reverse cholesterol transport.
These increased HDL cholesterol levels correlated with improved clinical outcomes at 12 weeks, as evidenced by improvement in Disease Activity Score 28-joint assessment using the Simplified Disease Activity Index [SDAI] and C-reactive protein level (DAS28-CRP) [39]. This association provides support for a potential relationship between increases in HDL levels and reduction of disease activity scores and inflammation in patients with rheumatoid arthritis. No such relationship was observed in patients treated with placebo.
Further evaluation of this trial using population pharmacokinetic/pharmacodynamics models to determine dose/exposure-response relationships was performed to assess the efficacy and safety of different doses of baricitinib for a potential phase III trial in the future [40].
The primary efficacy endpoint assessed was the ACR20/50/70 and the primary safety endpoint assessed was anemia. The results of the study showed a faster onset of ACR20 improvement in the 4 mg and 8 mg dosed groups along with a slight increase in incidence of anemia for the 8 mg dose group.
The evaluation concluded that 4 mg QD was the optimal dose from a risk-benefit standpoint, and that 2 mg QD may potentially be efficacious, although this should be explored further in a phase III trial. Importantly, this study also found no benefit to BID dosing from a safety or efficacy standpoint.
Another phase IIb randomized controlled trial in 2016 demonstrated similar effectiveness of baricitinib treatment in Japanese patients with rheumatoid arthritis who were concurrently taking methotrexate [41]. The study enrolled 145 patients. The primary endpoint was the ACR20 response rate of patients taking 4 mg or 8 mg daily of baricitinib compared to placebo at 12 weeks. A significantly greater proportion of patients in the 4 mg or 8 mg group responded to treatment relative to placebo (77% vs. 31%) at 12 weeks. Improvement of symptoms, including remission and physical function, was demonstrated as early as 2 weeks in the 4 mg and 8 mg groups. The adverse effects of baricitinib treatment were limited. The 1 mg, 2 mg, and 4 mg baricitinib groups showed similar rates of adverse events as compared to placebo.
However, there was a slightly increased rate of adverse events and abnormal laboratory values in the 8 mg group. There were 3 serious adverse events (SAEs) reported during the study period: cholecystitis in the placebo group, acute pancreatitis in the 2 mg baricitinib group, and cataract in the 8 mg baricitinib group. It is unclear if these SAEs were related to the experimental drug.
The safety of baricitinib in this Japanese cohort showed similar results to other trials with non-Japanese patients, with no malignancies, serious infection, tuberculosis, pneumocystis pneumonia, herpes zoster cases, or GI perforations reported. This trial justifies the need to further study the benefit to risk ratio of 2 mg and 4 mg baricitinib treatment in patients with rheumatoid arthritis concurrently taking methotrexate. This 12-week study was then extended to a 64-week study [42]. In this extension, patients that were originally randomized to placebo, 1 mg, or 2 mg doses were re-randomized to either 4 mg or 8 mg doses following the conclusion of the 12-week study. However, after analysis of data from other phase II trials occurring at this time, a decision was made to switch patients from 8 mg to 4 mg, so that all patients were taking 4 mg of baricitinib by the conclusion of the extension study. Of the 142 patients who completed the 12-week study, 109 of these patients continued and completed the extension study. During the extension period, patients who were originally assigned to the placebo group noted a significant improvement in ACR20, while those originally treated with baricitinib successfully maintained or improved on the progress made in the first 12 weeks.
No changes in safety of baricitinib were noted with a longer period of treatment. Notably, although no incidences of herpes zoster were noted in the first 12 weeks of the study, 11 patients (7.8%) developed herpes zoster during the extension and dropped out of the study. This was the most common reason for discontinuation of the study overall.
While this may seem remarkable, reactivation of herpes zoster is a common side effect of disease-modifying antirheumatic drug (DMARD) therapy, and the American College of Rheumatology even recommends that patients get the herpes zoster vaccine prior to starting DMARD therapy. This trial highlights the beneficial effects of baricitinib in patients concurrently taking methotrexate and shows no difference in safety and efficacy between short- and long-term dosing of baricitinib.
A phase III RA-BEACON trial
A phase III RA-BEACON trial in 2016 involving 527 patients compared the use of 2 mg or 4 mg of daily baricitinib with placebo at 24 weeks [43]. The primary endpoint used the ACR20 response to determine clinical improvement at 12 weeks. The study population consisted of patients > 18 years old with moderate-to-severe RA who had discontinued prior treatment with conventional tumor necrosis factor inhibitors (TNFis) or biologic disease-modifying antirheumatic drugs (bDMARDs) due to insufficient response or intolerance after > 3 months [44]. The results showed significantly more patients in the 4 mg baricitinib group had an ACR20 response than the placebo group (55% vs. 27%). Additionally, the 4 mg group as compared to the placebo showed a difference in response to the HAQ-DI and DAS28-CRP score, but not the SDAI score.
The rates of adverse events at 24 weeks were higher in the 2 mg and 4 mg groups (71% and 77%) relative to the placebo (64%). The most common adverse events were infections, including respiratory infections, bronchitis, and urinary tract infections. However, the rates of serious adverse events were similar throughout 24 weeks: 4% in the placebo group, 10% in the 2 mg group, and 7% in the 4 mg group. The two baricitinib treatment groups were also associated with a drop in neutrophil count versus placebo at 24 weeks (-560, -630 vs. +130). Serum creatinine and LDL levels were also elevated in the baricitinib groups compared to placebo. These results show evidence of clinical improvement with baricitinib treatment in a study population with disease particularly refractory to multiple biologic therapies.
Furthermore, the efficacy of baricitinib demonstrated in RA-BEACON was reflected by clinically significant changes in patient reported outcomes (PROs). These are core measures established by the American College of Rheumatology that assess disease activity in clinical trials and may improve the patient-physician relationship by addressing patient concerns such as onset of drug action, sustainability or risk of relapse, impact on quality of life, and efficacy plateau.
Patient reported outcomes utilize standardized scales measuring fatigue, duration of morning joint stiffness, severity of worst joint pain, physical and mental quality of life (Short Form-36), work productivity and impairment of daily activities, and physical function. RA-BEACON used a minimum clinically important difference (MCID) to assess the clinical relevance of changes to the aforementioned scores. Of note, baseline PROs were similar across treatment groups and represented significant disease burden. PROs were assessed at baseline as well as weeks 1, 2, 4 and every 4 weeks thereafter to week 24 [44]. Over 24 weeks, the majority of PROs significantly improved among patients receiving baricitinib compared with placebo, with patients receiving baricitinib 4 mg demonstrating a faster and greater magnitude of change than the baricitinib 2 mg group. In addition, the PRO improvements were not influenced by the type or number of previous bDMARDs used.
Furthermore, the groups receiving baricitinib reported significant improvement in morning joint stiffness (MJS) duration and fatigue as well as in regular activity (WPAI-RA), EQ-5D scores, and pain when compared to placebo. In contrast, no significant differences were observed for the SF-36 MCS measure between patients treated with baricitinib compared with placebo [44].
A post-hoc analysis was performed following the RA-BEACON trial to examine subgroups of the patient population and determine whether history of prior bDMARD use had an effect on the efficacy and safety of baricitinib [45]. The study examined ACR20 responses in multiple subgroups of the 527 patients and found no interactions with age, weight, geographic region, disease duration, seropositivity, corticosteroid use, number of prior bDMARDs used, number of prior TNF inhibitors used, or type of TNF inhibitors used, with 2 mg or 4 mg baricitinib. Notably, the only change in safety was an increased risk of infections and serious adverse events in patients who had taken three or more bDMARDs prior to baricitinib treatment. However, the small number of patients in the subgroup with a history of three or more bDMARDs limits the significance of this finding.
A phase III RA-BUILD trial
The phase III RA-BUILD trial studied the effectiveness of 2 mg or 4 mg of baricitinib therapy compared to placebo in 684 randomized patients who also did not respond to conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) [46]. The study population was naïve to biologic DMARD treatment. The primary endpoint was the ACR20 at 12 weeks. Most of the study population was receiving simultaneous csDMARD therapy; only 7% of the patients were not taking concomitant csDMARD therapy. A statistically significant increase in the ACR20 response rate for patients taking 4 mg of baricitinib (62%) compared to placebo (39%) was seen at 12 weeks. There was no difference in treatment effect between patients taking concomitant csDMARDs and those taking no csDMARD. Additionally, a reduction in radiographic progression of structural joint damage was observed at 24 weeks in both groups receiving baricitinib therapy relative to placebo. There was no significant difference in the rate of adverse events between the baricitinib groups (67%, 71%) and placebo (71%).
The baricitinib groups showed an increased incidence of low neutrophil count and elevated LDL and HDL values. However, similar rates of serious infections were observed in placebo, 2 mg and 4 mg baricitinib groups (2%, < 1%, and 2%). Both baricitinib groups also exhibited a small increase in the serum creatinine.
The results of this trial demonstrated symptomatic relief and a beneficial effect on joint damage without significant side effects in patients who failed to benefit from prior csDMARD therapy.
A phase III RA-BEAM trial
In 2017, the RA-BEAM phase III trial was the first trial to investigate the use of baricitinib versus another biologic, adalimumab, in patients worldwide taking background methotrexate therapy [47]. Adalimumab is an anti-tumor necrosis factor α monoclonal antibody, a biologic that was used as standard of care in combination with methotrexate for patients with moderate to severe RA. The primary endpoint was the ACR20 response at 12 weeks. A significant increase in the response rate was demonstrated with 4 mg of daily baricitinib compared to placebo (70% vs. 40%) as well as compared to 40 mg of adalimumab (70% vs. 61%). Structural joint damage was also decreased at 24 weeks in both the baricitinib and adalimumab treatment groups relative to placebo. Adverse events through week 24 occurred at a rate of 60% in the placebo group, 71% in the baricitinib group, and 68% in the adalimumab group. The baricitinib and adalimumab groups both exhibited a decrease in neutrophil count and an elevation in aminotransferase, creatinine, LDL, and HDL levels as compared to placebo.
This groundbreaking trial showed the superiority of baricitinib plus methotrexate to what was considered standard of care treatment at the time, adalimumab plus methotrexate.
A phase III RA-BEGIN trial
The 52-week phase III RA-BEGIN trial was the first to investigate the difference between methotrexate monotherapy, baricitinib monotherapy, and the combination of baricitinib and methotrexate therapy [48]. The primary endpoint assessed the ACR20 response rate at 24 weeks between baricitinib monotherapy and methotrexate monotherapy. At 24 weeks the response rate of baricitinib monotherapy (77%) was higher than methotrexate monotherapy (62%). Symptomatic improvement was observed in the baricitinib group as quickly as 1 week in comparison to methotrexate alone.
Similar results to baricitinib monotherapy were seen with baricitinib plus methotrexate (MTX) combination therapy. The baricitinib plus MTX group showed significantly superior radiographic benefit relative to MTX monotherapy, but not to baricitinib monotherapy. At 52 weeks the rates of serious adverse events were similar between MTX monotherapy, baricitinib monotherapy, and baricitinib plus MTX combination therapy (10%, 8%, and 8%, respectively). There were more non-serious infections recorded in the baricitinib plus MTX group than with either baricitinib or MTX monotherapy.
This trial supported the known benefits of MTX monotherapy in RA patients, while demonstrating superior outcomes in patients taking baricitinib alone or in combination with MTX. Additional benefit in radiographic evidence and measures of inflammation was shown in baricitinib plus MTX compared to baricitinib alone. However, combination therapy increased the risk of adverse events, such as non-serious infection.
Furthermore, RA-BEGIN utilized PROs to assess the safety and efficacy of baricitinib in adults with moderate-to-severe RA. The study found comparable effects on PRO improvements in the baricitinib monotherapy and baricitinib + MTX groups when compared to MTX monotherapy, with both baricitinib regimens appearing consistently more effective than MTX alone. Interestingly, the majority of PRO measures of joint pain, tiredness, duration of MJS, pain, fatigue, physical function, HRQOL, and PtGA improved to a greater extent at many or all time points measured in patients in both baricitinib groups compared to MTX monotherapy [48].
These statistically significant improvements in PROs suggest that baricitinib alone or combined with MTX may be an alternative therapy for patients for whom MTX monotherapy is undesirable.
Of note, a post-hoc analysis was performed following phase 3 of the RA-BEGIN study evaluating structural damage progression (as measured via radiographs of the hands and feet at weeks 12, 24 and 52) between each of the treatment groups [49]. The study looked particularly at two markers of RA disease activity, the Disease Activity Score for 28-joint count with serum high-sensitivity C-reactive protein (DAS28-hsCRP) and the Simplified Disease Activity Index (SDAI). Typically, acceptable management of RA symptoms is indicated by a DAS28-CRP score < 3.2 and an SDAI score < 11. This study found that if patients were able to stay below those thresholds, the risk of structural damage was lower in those treated with baricitinib alone or in combination with MTX. In patients with a DAS28-CRP > 3.2 or an SDAI > 11, only combination therapy with MTX and baricitinib was sufficient to decrease the risk of structural damage.
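The DAS28-CRP threshold cited above can be made concrete with the standard published formula, 0.56·√TJC28 + 0.28·√SJC28 + 0.36·ln(CRP + 1) + 0.014·GH + 0.96 (CRP in mg/L, patient global health on a 0-100 scale). The sketch below computes it for a hypothetical patient; the input values are illustrative only, not data from the trial:

```python
import math

def das28_crp(tjc28, sjc28, crp_mg_l, patient_global_0_100):
    """DAS28-CRP: Disease Activity Score, 28 joints, CRP-based."""
    return (0.56 * math.sqrt(tjc28)        # tender joint count (0-28)
            + 0.28 * math.sqrt(sjc28)      # swollen joint count (0-28)
            + 0.36 * math.log(crp_mg_l + 1)  # natural log of CRP + 1
            + 0.014 * patient_global_0_100   # patient global health VAS
            + 0.96)

# Hypothetical patient: 4 tender joints, 2 swollen, CRP 10 mg/L, global 30/100
score = das28_crp(4, 2, 10, 30)
print(round(score, 2))
print("below 3.2 threshold" if score < 3.2 else "above 3.2 threshold")
```

A score above the 3.2 cut-off would, per the analysis above, place this hypothetical patient in the group where only combination therapy reduced the risk of structural damage.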
The results of the four phase III trials assessing the efficacy and safety of baricitinib (RA-BEGIN, RA-BEAM, RA-BUILD, and RA-BEACON) showed significant improvement in ACR20 with 4 mg baricitinib when compared to placebo, MTX, and adalimumab standard of care treatment at 12 or 24 weeks. A subgroup analysis was performed in July of 2018 to see if the 394 Japanese patients treated across these trials showed similar benefit from treatment as the general study population [50]. The authors of this subgroup analysis found that across all four trials, equivalent improvements were made in ACR20 in Japanese patients when compared to the overall study population. The study did highlight differences in the Japanese population from other participants in these studies, including a lower body weight and lower average dose of MTX. However, an equal response to baricitinib was noted across all trials despite these differences.
The authors concluded that while these data are promising, long-term effects of baricitinib in this study population have yet to be published, noting that 328 of the 394 Japanese patients are enrolled in the 84-month RA-BEYOND study, which is expected to conclude in 2024.
Trials testing drug-drug interactions
Baricitinib undergoes active renal tubular secretion and is predominantly eliminated from the body unchanged in urine [51]. Its secretion depends on the basolaterally expressed OAT3 transporter and the apically expressed P-gp, BCRP, and MATE2-K transporters [51].
In the presence of probenecid, a strong OAT3 inhibitor, the CLr and CL/F of baricitinib decreased by 69% and 51%, respectively, and the AUC(0-∞) of baricitinib doubled in healthy subjects [51].
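As a rough arithmetic check (not part of the cited study), the reported decrease in CL/F and the observed AUC change are two views of the same relationship, since for an oral dose AUC(0-∞) = Dose/(CL/F):

```python
# Rough check (not from the study): for an oral drug, AUC(0-inf) = Dose / (CL/F),
# so a fractional decrease d in apparent clearance implies an AUC ratio of 1/(1-d).

def auc_ratio_from_clearance_drop(d):
    """AUC(inhibited) / AUC(control) implied by a fractional drop d in CL/F."""
    return 1.0 / (1.0 - d)

# Probenecid reportedly decreased CL/F of baricitinib by ~51%:
print(round(auc_ratio_from_clearance_drop(0.51), 2))  # ~2.04, i.e. a doubling of AUC
```

The implied ratio of about 2 matches the observed doubling of AUC(0-∞), so the two reported figures are internally consistent.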
Additionally, diclofenac and ibuprofen, OAT3 inhibitors with less inhibition potential than probenecid, were evaluated using physiologically based pharmacokinetic (PBPK) modeling and in vitro inhibition data to predict their inhibition of the OAT3-mediated secretion of baricitinib. That study used an in vitro IC50 value of 4.4 μM to reproduce the renal clearance of baricitinib and the inhibitory effect of probenecid. Using ibuprofen and diclofenac in vitro IC50 values of 4.4 μM and 3.8 μM toward OAT3, AUC(0-∞) ratios of baricitinib of 1.2 and 1.0 were predicted, suggesting that co-administration of diclofenac or ibuprofen is safe, as it does not cause clinically relevant drug-drug interactions with baricitinib [51]. These predictions are relevant for furthering our understanding of the safety profile of new RA therapies.
To sum up, baricitinib, currently a second-line drug for RA, is a novel once-daily, orally available drug built on the research described above. It targets the JAK/STAT signaling pathway by acting as a JAK inhibitor and is approved in Europe, Japan, and the US for treatment of RA after failure of treatment with methotrexate (MTX) and TNF inhibitors. Acting upstream in the JAK/STAT pathway, baricitinib specifically inhibits JAK1 and JAK2 and decreases the expression of IL-6 and TNF-α, as well as other cytokines such as IL-10, IL-8, and IFN-α. While decreasing inflammation, baricitinib interferes with other pathways involved in JAK/STAT signaling and may induce elevated levels of LDL and HDL and decreased hemoglobin production.
Phase I and II trials conducted so far demonstrated the safety and efficacy of baricitinib in RA, including with concurrent use of MTX. The currently accepted dose is 4 mg daily, which appears to balance the side effects (anemia, hypercholesterolemia, reactivation of latent infection) with the benefit (as measured by ACR20) and improvement in patients' symptoms at 12 weeks. The RA-BEACON phase III trial reiterated these findings, found no safety advantage of 2 mg over 4 mg dosing, and showed that 4 mg dosing significantly improved symptoms after 12 weeks of treatment, as measured by ACR20. Post-hoc analysis pointed to extensive prior treatment as a risk factor for reactivation of latent infection, but otherwise could not identify other safety or efficacy factors.
The RA-BUILD trial tested 2 mg and 4 mg dosing and used ACR20 at 12 weeks as its end point. The results, while not as impressive as RA-BEACON's, similarly demonstrated significant relief of symptoms with baricitinib over placebo, with or without concurrent DMARD use. Importantly, this study also showed decreased joint damage at the 24-week mark. RA-BEAM and RA-BEGIN compared baricitinib to MTX and other biologics and found some evidence to support its use over alternative treatments, and not just in addition to current DMARD treatment.
Conclusions
There is convincing evidence to support the use of baricitinib in RA, especially in addition to current treatment with DMARDs. It does carry significant risks, and those will have to be weighed against its efficacy. Phase IV and post-marketing data are still required to determine any long-term risks of baricitinib, the long-term effectiveness of this treatment, and rare potential side effects. It is also still unclear how baricitinib use affects joint damage in the long term and whether its main effect is limited to disease symptoms. Further research into the pathophysiology of RA will likely present even more opportunities to find treatment targets in this common inflammatory disease.
The authors declare no conflict of interest.
Geographic weighted regression analysis of hot spots of modern contraceptive utilization and its associated factors in Ethiopia
Background: Utilization of modern contraceptives is a common healthcare challenge in Ethiopia, and the prevalence of modern contraceptive utilization varies across regions. Therefore, this study aimed to investigate geographic weighted regression analysis of hotspots of modern contraceptive utilization and its associated factors in Ethiopia, using Ethiopian Demographic and Health Survey 2016 data.
Methods: Based on the 2016 Ethiopian Demographic and Health Survey data, a total weighted sample of 8,673 women was included in this study. For the geographically weighted regression analysis, ArcGIS version 10.7 and SaTScan version 9.6 statistical software were used. Spatial regression was done to identify factors associated with hotspots of modern contraceptive utilization, and model comparison was carried out using adjusted R2 and AICc. Variables with a p-value < 0.25 in the bi-variable analysis were considered for the multivariable analysis. Multilevel robust Poisson regression analysis was fitted for associated factors since the prevalence of modern contraceptive use was >10%. In the multilevel robust Poisson regression analysis, the adjusted prevalence ratio with its 95% confidence interval was reported to declare the statistical significance and strength of association.
Result: The prevalence of modern contraceptive utilization in Ethiopia was 37.25% (95% CI: 36.23%, 38.27%). Most of the hotspot areas were located in the Oromia and Amhara regions, followed by the SNNPR region and Addis Ababa city administration. Being single, being poor, and having more fertility preference were significant predictors of hotspot areas of modern contraceptive utilization. In the multivariable multilevel robust Poisson regression analysis, the following were significantly associated with modern contraceptive utilization: age 25–34 years (APR = 0.88, 95% CI: 0.79, 0.98) and 35–49 years (APR = 0.71, 95% CI: 0.61, 0.83); married marital status (APR = 2.59, 95% CI: 2.18, 3.08); other religions (APR = 0.76, 95% CI: 0.65, 0.89); having 1–4 children (APR = 1.18, 95% CI: 1.02, 1.37); no more fertility preference (APR = 1.21, 95% CI: 1.11, 1.32); residence in Afar (APR = 0.42, 95% CI: 0.27, 0.67), Somali (APR = 0.06, 95% CI: 0.03, 0.12), Harari (APR = 0.78, 95% CI: 0.62, 0.98), Dire Dawa (APR = 0.75, 95% CI: 0.58, 0.98), or the Amhara region (APR = 1.34, 95% CI: 1.13, 1.57); rural residence (APR = 0.80, 95% CI: 0.67, 0.95); and high community wealth index (APR = 0.78, 95% CI: 0.67, 0.91).
Conclusion and recommendation: There were significant spatial variations in factors affecting modern contraceptive use across regions in Ethiopia. Therefore, public health interventions targeting areas with low modern contraceptive utilization, taking into account the significant factors at the individual and community levels, will help to increase modern contraceptive use. The detailed map of cold spots of modern contraceptive use among women of reproductive age and its predictors could assist program planners and decision-makers in designing targeted public health interventions. The government of Ethiopia should develop more geographically targeted strategies for improving the socioeconomic status of women and the availability and accessibility of health facilities in rural areas of the country.
Background
Generally, contraceptive methods are broadly classified as modern contraceptive and traditional contraceptive methods. Modern contraceptives have been identified as an effective method for fertility reduction, and are thus being widely promoted to slow rapid population growth, particularly in developing countries [1,2]. Promoting access to modern contraceptives among women of reproductive age has also proven to be an effective public health intervention to improve maternal and child health outcomes [3,4]. Worldwide, modern contraceptives are important in fertility control [5]. In low-income countries, utilizing modern contraceptives has a clear effect on the health of women, children, and families. For instance, contraceptives are estimated to prevent 2.7 million infant deaths and the loss of 60 million healthy lives a year worldwide [6]. Promoting contraceptives in nations with high birth rates prevents 32% of all maternal fatalities and roughly 10% of infant deaths. Modern contraceptives also make a huge contribution to the achievement of universal primary schooling and female empowerment, and to reducing poverty and hunger [7]. Family planning is also important in preventing unintended pregnancies and unsafe abortions [8,9].
Despite its importance, access to and utilization of modern contraceptives vary worldwide. Women in developed countries have better access to and use of contraceptives compared with women in developing countries [8]. In a study of the period 2010 to 2014, it was reported that the global burden of unintended pregnancies was 44%; the rate of unintended pregnancies is substantially higher in developing countries compared with developed regions [10]. There is a high unmet need for modern contraception in low-income countries, and this may contribute to higher rates of unintended pregnancies. For instance, in sub-Saharan Africa, the prevalence of contraceptive use among women of reproductive age is only 17% [11].
Similarly, the utilization of modern contraceptives is a common healthcare challenge in Ethiopia [12]. The prevalence of modern contraceptive use varies across regions. For instance, the Somali region accounts for the lowest rate of modern contraceptive use (1.4%), compared with Addis Ababa (50.1%) [12].
Whether married or not, adolescent girls and young women (AGYW) in the developing world have very low use of modern contraceptives compared to other age groups, which means that the contraceptive needs of AGYW deserve further international interventions [13]. Despite the introduction of modern contraceptive methods over the last few decades, the level of utilization has also been inadequate among women of reproductive age overall: in Ethiopia, only 36% of married women of reproductive age (15-49 years) used modern contraceptives [12]. Attention to modern contraceptive use among women of reproductive age is therefore vital for designing interventions, plans, and policies to address early-age pregnancies and other related issues. It is also useful for reducing unsafe abortions, maternal death, and sexually transmitted infections (STIs). Low use of modern contraceptives among the reproductive age group is the result of several contributing factors [14,15].
In Ethiopia, modern contraceptives are provided without charge in every government health facility to encourage utilization. The main determinant factors associated with modern contraceptive utilization are the number of living children, the woman's current age, age at first birth, education, marital status, terminated pregnancy, religious affiliation, media exposure about family planning, wealth index, working status, fertility preference, and distance to a health facility [16,17]. Community-level factors are variables shared by a community that affect contraceptive use, positively or negatively, at the community rather than the individual level; they are the major reason to consider multilevel analysis in this study. Community-level factors found to be associated with contraceptive utilization in Nigeria and Mozambique include region, community-level wealth index, place of residence, and community-level media exposure [18,19].
Previous studies considered only married women as the study population and did not account for geographical variation of variables across Ethiopian regions. However, there is evidence of variation in the influence of variables on health service utilization across regions, and unmarried women are almost as sexually active as married women [20]. Therefore, this study aimed to conduct a spatial regression analysis of modern contraceptive use among women of reproductive age in Ethiopia, irrespective of marital status, and to identify the potential factors associated with the use of modern contraceptives while considering geographic variation of variables. As a result, this study will facilitate evidence-based decision-making by complementing the limitations of previous studies.
The findings of this study will be useful for health planners, policymakers, and non-governmental partners working to improve the health and well-being of women of reproductive age in Ethiopia. Besides mapping hotspot areas of modern contraceptive use, it will provide a deeper understanding of the impacts of interventions already implemented in each region of the country. Furthermore, it will assist in designing programs and strategies to increase the coverage, quality, and equity of women's reproductive health at the country level.
Study design, data source and period
A population-based cross-sectional study was conducted from January 18, 2016, to June 27, 2016, using the EDHS 2016 data set. The survey data were accessed from the MEASURE Demographic and Health Survey program (https://dhsprogram.com/).
Study area
The EDHS 2016 is the fourth nationwide survey conducted in Ethiopia. Ethiopia is the second most populous country in Africa, after Nigeria, with a population of more than one hundred million. Administratively, Ethiopia is divided into nine geographical regions (Tigray, Afar, Amhara, Oromia, Somali, Benishangul-Gumuz, SNNPR, Gambella, and Harari) and two administrative cities, Addis Ababa and Dire Dawa. Ethiopia shares boundaries in the north with Eritrea, in the south with Kenya and Somalia, in the west with South Sudan and Sudan, and in the east with Djibouti and Somalia. Modern contraceptive methods are freely available, without any fee, for reproductive-age women in all public health facilities in Ethiopia.
Source and study population
Source population. The source population was all women of reproductive age (15-49 years) in Ethiopia.
Inclusion and exclusion criteria
Inclusion criteria. All women of the reproductive age group were included in the study. Exclusion criteria. Women who had never had sex, pregnant women, and enumeration areas (EAs) with zero longitude and latitude were excluded.
Sampling procedures
In this study, interviews were completed for 15,683 women. A total of 8,673 eligible women were included after the exclusion criteria were applied (Fig 1). Each woman was asked whether she had obtained any modern contraceptive method in the 5 years preceding the survey. Sampling weights were used to restore the representativeness of the sample data.
Study variables
Outcome variable. Whether a woman received any modern contraceptive method (yes/no). Independent variables. Individual-level factors included age, religion, marital status, working status, education status, wealth index, number of living children, terminated pregnancy, age at first birth, media exposure, fertility preference, and distance to a health facility.
Community-level factors included community wealth index, residence (urban or rural), region, and community media exposure.
Shape files at the regional, zonal, and district levels were obtained from the Central Statistical Agency of Ethiopia.
Operational definitions
Modern contraception method: A woman/man was considered to be using modern contraception if she/he used any of the following modern contraceptive methods: female/male sterilization, contraceptive pills, IUD, injectables, implants (Norplant), diaphragm, lactational amenorrhea method (LAM), standard days method (SDM), emergency contraception, and male/female condom [21].
Community wealth index: This was generated by aggregating the household wealth index at the cluster/EA level. Because this variable was not normally distributed, clusters were dichotomized at the median: a cluster was categorized as high community poverty when its proportion of women with a wealth index below the national median exceeded the median cluster-level proportion (0.3333333), and as low community poverty otherwise.
Media exposure: A household was considered exposed to media if its members listened to the radio, watched television, or read a newspaper/magazine at least once a week [12].
Community media exposure: Clusters in which the proportion of women who listened to the radio, watched television, or read a newspaper/magazine was below the median (0.1666667) were considered to have "low media exposure", and those above the median were considered to have "high media exposure".
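The median-split aggregation used for the community wealth index and community media exposure can be sketched as follows; the cluster records and the strict-inequality tie rule are illustrative assumptions, not taken from the survey data:

```python
from statistics import median

# Toy records of (cluster_id, is_poor); per-cluster proportions: 0.25, 0.5, 0.75.
records = [(1, 1), (1, 0), (1, 0), (1, 0),
           (2, 1), (2, 1), (2, 0), (2, 0),
           (3, 1), (3, 1), (3, 1), (3, 0)]

by_cluster = {}
for cid, poor in records:
    by_cluster.setdefault(cid, []).append(poor)

prop_poor = {cid: sum(v) / len(v) for cid, v in by_cluster.items()}
cut = median(prop_poor.values())                     # median cluster-level proportion
community_poverty = {cid: ("high" if p > cut else "low")  # tie rule is a convention
                     for cid, p in prop_poor.items()}
print(community_poverty)  # {1: 'low', 2: 'low', 3: 'high'}
```

The same pattern, with a different indicator variable and its own median cut-point, yields the community media exposure classification.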
Hot spot: Areas with high modern contraception utilization.
Cold spot: Areas with low modern contraception utilization.
Data management and analysis
Data extraction, coding, and analysis were done using STATA version 16, ArcGIS version 10.7, and SaTScan version 9.6 statistical software. Socio-demographic characteristics of the study participants were summarized as frequencies and percentages. To re-establish the representativeness of the data, weighted data were used for the analysis. To account for the hierarchical nature of the EDHS data, multilevel Poisson regression analyses were employed; the clustering effect was assessed using the intra-class correlation coefficient (ICC) and was taken into account because the ICC was >10%. In this case, the prevalence ratio is the preferred measure of association, to minimize overestimation of the association between the outcome and independent variables. Variables with a p-value < 0.25 in the bi-variable multilevel Poisson regression analysis were included in the final multivariable regression model, in which prevalence ratios with 95% confidence intervals were estimated to identify independent predictors of modern contraceptive use. To declare statistical significance, p-values less than 0.05 were used.
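Because the model uses a log link, each adjusted prevalence ratio reported in the Results is the exponentiated regression coefficient, with a Wald confidence interval computed on the log scale. A minimal sketch (the coefficient and standard error below are illustrative values chosen to reproduce the married-vs-single APR of 2.59 reported in this study, not the fitted coefficients themselves):

```python
import math

def prevalence_ratio(beta, se, z=1.96):
    """Prevalence ratio and Wald 95% CI from a log-link (Poisson) coefficient
    and its robust standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative inputs only, chosen to reproduce APR = 2.59 (95% CI: 2.18, 3.08):
pr, lo, hi = prevalence_ratio(beta=0.951, se=0.088)
print(f"APR = {pr:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```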
Spatial analysis
The spatial autocorrelation (Global Moran's I) statistic was used to evaluate whether the distribution of modern contraceptive utilization is random at the national level. A Moran's I value close to −1 indicates that modern contraceptive utilization is dispersed, a value close to +1 indicates that it is clustered, and a value close to 0 indicates that it is randomly distributed. A statistically significant Moran's I (p < 0.05) shows that modern contraceptive utilization is non-random [22,23].
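A minimal implementation of Global Moran's I illustrates the sign convention described above; the toy contiguity matrix is an assumption for demonstration, whereas a real analysis builds weights from the EA coordinates:

```python
import numpy as np

def morans_i(values, w):
    """Global Moran's I for values at n locations, given an n-by-n
    spatial weights matrix w with a zero diagonal."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    n = len(z)
    return n * (z @ w @ z) / (w.sum() * (z @ z))

# Toy example: four locations on a line, rook contiguity.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i([10, 9, 2, 1], w), 3))   # clustered values -> positive I
print(round(morans_i([10, 2, 9, 1], w), 3))   # alternating values -> negative I
```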
Hot spot analysis was computed to measure how spatial autocorrelation varies over the study location by calculating the Getis-Ord Gi* statistic for each area. To determine whether there was substantial clustering, Z-scores and p-values were calculated. A high Getis-Ord Gi* value indicates a "hot spot", whereas a low value indicates a "cold spot". Empirical Bayesian kriging interpolation was used to predict modern contraceptive use in un-sampled areas. Spatial scan statistical analysis was used to perform cluster analysis and detect the most likely clusters by computing the relative risk (RR) and testing its statistical significance [24].
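The Getis-Ord Gi* statistic can be sketched as below; the chain-contiguity weights and values are toy assumptions. The resulting z-scores are positive near high-valued clusters (hot spots) and negative near low-valued ones (cold spots):

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-scores; w includes self-weights (w[i, i] > 0)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)        # population SD
    wi = w.sum(axis=1)                              # sum of weights per location
    num = w @ x - xbar * wi
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den

x = [8, 9, 7, 1, 2, 1]                              # a high cluster then a low cluster
w = np.eye(6)
for i in range(5):                                  # chain contiguity plus self-weight
    w[i, i + 1] = w[i + 1, i] = 1.0
z = getis_ord_gi_star(x, w)
print(z.round(2))   # positive near the high values, negative near the low values
```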
Spatial regression
Ordinary least squares (OLS) regression: spatial regression modeling was performed to identify predictors of the spatial heterogeneity of modern contraceptive utilization. OLS is a global statistical model for testing and explaining the relationship between the outcome and explanatory variables [25], and it also serves as a diagnostic tool for selecting appropriate predictors for the geographically weighted regression (GWR) model [26]. Several assumptions were checked before proceeding to GWR analysis: non-stationarity, residual spatial autocorrelation, model bias, and multicollinearity were assessed using the Koenker test, Moran's I, Jarque-Bera statistics, and the VIF, respectively.
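The multicollinearity check can be illustrated with a small variance inflation factor (VIF) computation; the 7.5 threshold follows the text, while the data here are synthetic:

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column of X (no intercept column):
    VIF_j = 1 / (1 - R^2 of column j regressed on the other columns)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = rng.normal(size=200)
c = a + 0.1 * rng.normal(size=200)      # nearly collinear with a
v = vif(np.column_stack([a, b, c]))
print(v.round(1))   # columns a and c far exceed 7.5; b stays near 1
```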
GWR is a local spatial statistical technique that assumes non-stationarity in the relationships between the outcome and predictors across EAs [26]. The GWR analysis was employed given the evidence of a statistically significant Koenker test. In GWR, the coefficients of the predictors take different values across the study area; mapping the GWR coefficients associated with the predictors provides insight for targeted interventions. The best-fitting model for the data was identified using the lowest corrected AIC (AICc) and the highest adjusted R2.
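The core GWR idea, a separate weighted least-squares fit at each location, can be sketched as follows. This is a minimal illustration assuming a fixed Gaussian kernel and a hypothetical bandwidth; real analyses also calibrate the bandwidth, e.g., by minimizing AICc:

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Local weighted least squares at each location (fixed Gaussian kernel)."""
    betas = []
    for ci in coords:
        d = np.linalg.norm(coords - ci, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian kernel weights
        XtW = X.T * w                               # row-scale X.T by the weights
        betas.append(np.linalg.solve(XtW @ X, XtW @ y))
    return np.array(betas)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))
x1 = rng.normal(size=50)
true_slope = 0.5 + 0.2 * coords[:, 0]               # effect varies across space
y = 1.0 + true_slope * x1 + rng.normal(scale=0.1, size=50)
X = np.column_stack([np.ones(50), x1])
local = gwr_coefficients(coords, X, y, bandwidth=2.0)
# local[:, 1] tracks the spatially varying slope, rising with the x-coordinate
```

Mapping `local[:, 1]` back onto the coordinates is what produces the coefficient surfaces discussed above.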
Result
A total weighted sample of 8,673 reproductive-age women was included. The mean age of the study participants was 29.54 (SD ±7.73) years. The largest share (44.13%) of the women were in the 25-34 age group. The vast majority (85.7%) of the women were married. More than one-third (37.48%) of the respondents were in the Oromia region, and about 7,043 (81.21%) were from rural areas. Regarding maternal education, the majority (58.51%) of the respondents had not attained formal education. Nearly half (44.25%) of the respondents were Orthodox religion followers. The largest share of the respondents (41.63%) were rich, and more than half of the respondents (58.24%) had more fertility preference (Tables 1 and 2).
Spatial distribution of modern contraceptive utilization
A total of 623 clusters were considered for the spatial analysis of modern contraceptive utilization. Points on the map represent the clusters and their corresponding proportions of modern contraceptive utilization. The red color indicates areas with a low proportion of modern contraceptive utilization, whereas the blue color represents areas with a high proportion. The highest prevalence of modern contraceptive utilization among reproductive-age women was observed in Oromia, Amhara, Addis Ababa, and SNNPR (Fig 2).
Spatial autocorrelation of modern contraceptive utilization
The spatial distribution of modern contraceptive use among reproductive-age women in Ethiopia was non-random (Global Moran's I = 0.32; Z-score = 19.78; p-value 0.0000) (Fig 3).
Hotspot analysis of modern contraceptive utilization
Most of the hot spot areas (blue color), those with high modern contraceptive prevalence rates, were located in the Oromia and Amhara regions, followed by the SNNPR region and Addis Ababa city administration. On the other hand, the majority of the cold spot areas (red color), those with low modern contraceptive prevalence rates, were located in the Gambela, Somali, and Afar regions, followed by Tigray. This clustering was supported by the Getis-Ord Gi* statistic in the spatial analysis (Fig 4).
Spatial interpolation of modern contraceptive utilization
Using Empirical Bayesian kriging interpolation, the green ramp color on the map indicates the predicted highest modern contraceptive utilization rates in the Amhara, Tigray, Afar, Oromia, northern Somali, and northern Benishangul-Gumuz regions. However, the dim red color indicates predicted low modern contraceptive utilization in northern and southern Somali, Addis Ababa, southern Benishangul-Gumuz, northern Afar, and southern and northern Oromia (Fig 5).
Spatial scan statistical analysis
The spatial scan statistics identified a total of 211 high-, medium-, and low-performing spatial clusters of modern contraceptive utilization. Of these, 180 clusters were most likely primary clusters (high-performing clusters) accounting for 46.7%, 15 were secondary clusters (medium-performing clusters) accounting for 64.9%, and 16 were tertiary clusters (low-performing clusters) accounting for 64.3%. The green colors (rings) indicate the most statistically significant spatial window, which contains primary clusters located in the Amhara, Addis Ababa, and Benishangul-Gumuz regions. This window was centered at 10.575333 N, 37.480816 E with a 293.34 km radius.
Ordinary least squares (OLS) regression analysis
The OLS model was computed to diagnose multicollinearity between the independent variables; the VIF values of all variables were less than 7.5. In the OLS analysis, the model explained about 38% (adjusted R2 = 0.38) of the variation in modern contraceptive utilization among women, with AICc = -307.87. The Koenker test is used to check whether the relationships in the model are non-stationary; the Koenker statistic was statistically significant in our model, indicating that the relationship between the explanatory variables and the outcome variable was non-stationary (heterogeneous) across the study areas. Because the Koenker statistic was significant, robust probabilities were used to screen out significant predictors: the proportion of single women, the proportion of poor women, and the proportion of women with more fertility preference were significantly associated with the prevalence of modern contraceptive utilization among women in the OLS model (Table 3). The Joint F-statistic and Wald statistic were highly significant (p < 0.01), indicating that the model was statistically significant. The residuals were not normally distributed, as the Jarque-Bera statistic was statistically significant (0.016310*), and they were spatially autocorrelated (Moran's Index = 0.18, p = 0.0000). This indicates that GWR should be applied, since the Koenker statistic showed non-stationarity, implying spatial heterogeneity in the relationship between the independent and dependent variables across space. The same independent variables were used for the GWR analysis.
Geographically weighted regression (GWR) analysis
The result of the GWR analysis showed a significant improvement over the global model (OLS). The AICc value decreased from -307.87 to -411.61, a difference of 103.74, implying that GWR best explains the spatial heterogeneity of modern contraceptive utilization among women. In addition, the model's ability to explain modern contraceptive utilization improved by 13 percentage points with GWR, as the adjusted R2 rose to 0.51 (Table 4). In the geographically weighted regression analysis, the proportion of single women, the proportion of poor women, and the proportion of women with more fertility preference were significant predictors of hotspot areas of modern contraceptive utilization among women. These three factors were used as independent variables in the GWR analysis because they were significant in the OLS analysis.
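The model comparison above rests on the corrected Akaike Information Criterion. A small helper shows the formula and reproduces the reported AICc difference; the log-likelihood input is a placeholder, since the paper reports only the final AICc values:

```python
import math

def aicc(log_likelihood, k, n):
    """Corrected AIC for k parameters and n observations (smaller is better):
    AICc = AIC + 2k(k+1)/(n-k-1), with AIC = -2*logL + 2k."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# The reported improvement when moving from OLS to GWR:
print(round(-307.87 - (-411.61), 2))    # AICc drop of 103.74
```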
Multilevel analysis
A multilevel Poisson regression analysis was used to analyze the effect of women's individual characteristics and community-level factors in determining women's use of modern contraceptives. In the null model, 13.42% (95% CI: 11.39%, 15.76%) of the total variance in the prevalence of modern contraceptive utilization was accounted for by between-cluster variation in characteristics. The between-cluster variability decreased over successive models, from 13.42% in the null model to 9.77% (95% CI: 8.10%, 11.75%) in the individual-level-only model, 9.31% (95% CI: 7.77%, 11.11%) in the community-level-only model, and 8.44% (95% CI: 6.98%, 10.17%) in the combined model. Thus, the combined model of individual-level and community-level factors was preferred for predicting women's modern contraceptive utilization.
Factors associated with modern contraceptive utilization
First, bi-variable Poisson regression analysis was conducted to identify variables significant at a p-value < 0.25. In the bi-variable Poisson regression analysis, the individual-level factors age, religion, marital status, working status, educational status, wealth index, number of living children, media exposure, fertility preference, and distance to a health facility were significantly associated with modern contraceptive utilization. The community-level factors region, residence, community-level wealth index, and community-level media exposure were also statistically significant in the bi-variable analysis.
In the multilevel Poisson regression with robust variance, the individual-level factors age, religion, marital status, and fertility preference were significantly associated with modern contraceptive utilization. The community-level factors region, residence, and community-level wealth index were also significantly associated with modern contraceptive utilization (Table 5).
Discussion
In this study, we investigated the association of individual- and community-level predictors with modern contraceptive use among women of reproductive age in Ethiopia; the prevalence of modern contraceptive use was 37.25%. This is higher than in a study conducted in Metekel Zone (18.6%) [27] and in Ghana [1]. However, our finding was lower than that of a study conducted in Kenya (51%) [28]. A possible reason for this variation could be the socio-demographic characteristics of the study participants: only 41.9% of respondents in this survey completed formal education, a lower percentage than Kenya's 93% [28]. Additionally, access to the media may promote the use of modern family planning methods, yet in this study only 23.73% of women had such access [29].
In the spatial regression analysis, marital status, poverty, and fertility preference were significant predictors of hotspot areas of modern contraceptive utilization. An increased proportion of single women decreased modern contraceptive utilization in the Somali, Harari, Dire Dawa, and Oromia regions. The reason might be the importance of couple motivation through education, the self-reliance of married women, and male participation in reproductive health issues [27,30]. This study revealed that married women had a greater tendency to use modern contraception than single women. This result is similar to studies in Ethiopia [35] and Ghana [1], but contradicts a study done in Mali [36]. A possible explanation might be exposure to maternal and birth control services during antenatal and postnatal care; women's previous exposure probably enhances their access to scarce resources and enables them to use these services. Couple motivation is very important for education, the self-reliance of married women, and male participation in reproductive health issues [37].
Followers of other religions were found to be less likely to use modern contraceptive methods in Ethiopia. This result is consistent with a study done in Zambia [38]. A possible reason might be that religion has a similar socio-cultural importance in influencing the lives of women in Ethiopia; in particular, the introduction of some family planning teachings in religiously conservative societies might be disadvantageous [39].
In our study, women having 1 to 4 living children were more likely to use modern contraceptives than women having no children. This finding is comparable with a study report from India [40]. As the number of children increases, women tend to use contraceptives once their desired number of children is met, while nulliparous women might have had no idea about the use of modern contraceptives [30].
This study found a significant relationship between fertility preference and modern contraceptive utilization. Women who had no desire for more children were more likely to utilize modern contraceptives than women who wanted more children. This finding is in line with studies conducted in Rwanda [41]. A possible explanation could be that women who have no desire for extra children use modern contraception to achieve this goal. Since modern contraception is safe, cheap, and long-term in preventing unwanted pregnancy, it is a method of choice for couples who need to completely delay childbirth [42].
There is variation in modern contraception utilization across regions and city administrations. Being a resident of the Amhara region increases modern contraceptive utilization. This variation is confirmed by the DHS reports conducted every 5 years since 2000: over the 16 years between 2000 and 2016, the Amhara region showed an increase in the utilization of modern contraceptives. The large increase in the use of modern contraceptives in the Amhara region might be related to the high number of family planning organizations and the regional government's focus on this region [12]. Women from rural settlements are less likely to utilize modern contraception than those from urban settlements. This finding is supported by studies done in Uganda [43]. The possible reason might be the low availability and accessibility of healthcare facilities, trained healthcare providers, and family planning resources in rural parts of Ethiopia; in pastoral communities the problem is more severe, compounded by a lack of awareness [44]. In this study, high community wealth negatively affected modern contraceptive use in Ethiopia. Since modern contraceptives are available free of charge, the contribution of community wealth to modern contraceptive utilization cannot be explained by the ability to pay for the service; rather, it reflects the general socioeconomic position of the community [45]. The reason might be that economically better-off communities obtain modern contraceptive services from dispensaries rather than regular MCH departments, which results in under-reporting of modern contraception among the high economic class [19].
Strengths and limitations
The large sample size and the use of nationally representative data may have improved our ability to estimate the parameters. To account for the hierarchical structure of the EDHS data, the study also used multilevel analysis. Similarly, it was crucial to perform spatial analysis and GWR to determine the geographic variation and, separately, the predictors of modern contraceptive use. This study will assist policy makers in developing or improving intervention methods based on the identified spatial variations. Cross-sectional data were employed in this analysis, which limits inferences about the causal effects of the factors on the dependent variable. Additionally, several variables were not taken into consideration because they were not present in the EDHS data set.
Conclusion and recommendation
There is significant spatial variation in the factors affecting modern contraceptive use across regions in Ethiopia. Therefore, public health interventions targeting areas with low modern contraceptive utilization, while considering the significant factors at the individual and community levels, will help to increase modern contraception use. Identifying locations with low modern contraceptive use and its predictors could assist program planners and decision-makers in designing targeted public health interventions.
The Government of Ethiopia must develop more geographically targeted strategies for improving the socioeconomic status of women and the availability and accessibility of health facilities in rural areas of the country. This will not only increase modern contraceptive provision, but will also reduce teenage pregnancy and birth and, in turn, contribute to the achievement of Sustainable Development Goal three. Improving modern contraception use among the reproductive age group will also require connecting women with information and services during their routine health service visits and taking advantage of missed opportunities for contact with the health facility.
Table 1. Socio-demographic characteristics of women, EDHS 2016. Individual-level characteristics of respondents in Ethiopia, 2016 (n = 8,673).
*Other: Catholic and traditional. https://doi.org/10.1371/journal.pone.0288710.t001

The primary cluster had a relative risk (RR) of 1.53 and a log-likelihood ratio (LLR) of 114.68, at p-value < 0.01. The secondary clusters (yellow rings) were located in the western part of Oromia, and the third in the southwestern part of SNNPR. The second cluster's spatial window was centered at 6.721839 N, 38.294189 E with a 54.02 km radius, a relative risk (RR) of 1.80, and a log-likelihood ratio (LLR) of 59.91, at p-value < 0.01. The third cluster's (red rings) spatial window was centered at 7.645646 N, 35.353485 E with a 66.80 km radius, a relative risk (RR) of 1.74, and a log-likelihood ratio (LLR) of 21.08, at p-value < 0.01 (Fig 6).
LOFAR Measures the Hotspot Advance Speed of the High-Redshift Blazar S5 0836+710
Our goal is to study the termination of an AGN jet in the young universe and to deduce physical parameters of the jet and the intergalactic medium. We use LOFAR to image the long-wavelength radio emission of the high-redshift blazar S5 0836+710 on arcsecond scales between 120 MHz and 160 MHz. The LOFAR image shows a compact unresolved core and a resolved emission region about 1.5 arcsec to the southwest of the radio core. This structure is in general agreement with previous higher-frequency radio observations with MERLIN and the VLA. The southern component shows a moderately steep spectrum with a spectral index of $\gtrsim -1$ while the spectral index of the core is flat to slightly inverted. In addition, we detect for the first time a resolved steep-spectrum halo with a spectral index of about $-1$ surrounding the core. The arcsecond-scale radio structure of S5 0836+710 can be understood as an FR II-like radio galaxy observed at a small viewing angle. The southern component can be interpreted as the region of the approaching jet's terminal hotspot and the halo-like diffuse component near the core can be interpreted as the counter-hotspot region. From the differential Doppler boosting of both features, we derive a hotspot advance speed of $(0.01-0.036)$ c. At a constant advance speed, the derived age of the source would substantially exceed the total lifetime of such a powerful FR II-like radio galaxy. Thus, the hotspot advance speed must have been higher in the past, in agreement with a scenario in which the originally highly relativistic jet has lost collimation due to the growth of instabilities and has transformed into an only mildly relativistic flow. Our data suggest that the density of the intergalactic medium around this distant ($z=2.22$) AGN could be substantially higher than the values typically found in less distant FR II radio galaxies.
Introduction
Radio-loud active galactic nuclei (AGN) can eject powerful double-sided relativistic jets (Blandford et al. 2018) into the intracluster medium that emit synchrotron emission and can reach distances of Megaparsecs. The most powerful of these sources exhibit so-called FR II morphologies (Fanaroff & Riley 1974) in which the jets are terminated in high surface-brightness regions called hotspots, where the jets interact with the surrounding medium. FR II radio galaxies have been studied extensively at centimeter wavelength with the Very Large Array (VLA; e.g., O'Dea et al. 2009), estimating ages, velocities, magnetic fields, total lifetime, ambient gas densities, and other quantities.
Blazars are a subclass of AGN, whose jets are aligned at a small angle to the line of sight towards Earth. They can be classified into BL Lac objects and flat-spectrum radio quasars (FSRQs). According to the AGN unified scheme (Antonucci 1993;Urry & Padovani 1995), FSRQs are the beamed counterparts of FR II radio galaxies. Because of relativistic bulk motion of plasma at small inclination angles, the compact (i.e., parsec scale) emission of blazar jets gets drastically Doppler boosted and can be observed out to very high redshifts.
According to synchrotron theory, electrons with Lorentz factor γ emit at frequencies ν ∼ 10⁻⁶ γ² B GHz (with B in mG). High radio-frequency observations thus typically probe emission from electrons at γ > 1000. The emitted spectrum of large-scale components associated with AGN jets (unbeamed lobe emission and moderately beamed hotspot emission) can be described by a power law, F_ν ∝ ν^α, with the spectral index α typically in the range −0.5 to −1, while the beamed emission from the central jet in blazars is typically characterized by flat spectral indices α ∼ 0. Consequently, blazars have been studied extensively at high radio frequencies, where the jet dominates and Very-Long-Baseline Interferometry (VLBI) techniques offer unprecedented angular resolution of the inner jet region (Zensus 1997), while the low-frequency properties of blazar lobe emission have received less attention in the past decades.
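As a quick numerical check of this scaling (the helper functions below are illustrative, not from the paper):

```python
# Characteristic synchrotron frequency nu ~ 1e-6 * gamma^2 * B GHz (B in mG),
# as quoted above. Illustrative helpers, not the authors' code.

def sync_freq_ghz(gamma, b_mg):
    """Approximate synchrotron frequency (GHz) for Lorentz factor gamma
    in a magnetic field of b_mg milligauss."""
    return 1e-6 * gamma ** 2 * b_mg

def gamma_for_freq(nu_ghz, b_mg):
    """Electron Lorentz factor emitting at nu_ghz (GHz) in a field b_mg (mG)."""
    return (nu_ghz / (1e-6 * b_mg)) ** 0.5

# A 1 GHz observation in a 1 mG field probes gamma ~ 1000, while the LOFAR HBA
# band at ~0.143 GHz reaches down to gamma ~ 380 in the same field.
```

This is why the sub-GHz LOFAR band probes the low-energy (γ < 1000) electron population discussed in the text.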
Several further constraints affect the study of large-scale blazar observational data. Due to the strong projection effects at the small inclination angles involved, the emission of the two lobes associated with the jet and counterjet can blend. Moreover, the hotspots of jet and counterjet are subject to noticeable differential Doppler boosting due to the mildly relativistic advance speeds of hotspots in the intergalactic medium (O'Dea et al. 2009). Light-travel time differences between both jets can affect the observed arm ratios and cause differential aging of hotspot and counter-hotspot.

arXiv:1909.02412v1 [astro-ph.HE] 5 Sep 2019
These problems can partially be overcome with high-resolution observations at long observing wavelengths as provided by LOFAR (van Haarlem et al. 2013), which offers unprecedented sensitivity at 40−240 MHz and an angular resolution greatly improved over previous instruments. The low-energy electron population (γ < 1000) responsible for the blazar lobe emission can give us unique new insights into the large-scale structure of blazars and therefore provides a probe of the oldest observable structures in these powerful sources. In particular, while the counterjets of blazars are typically strongly debeamed and therefore unobservable, the lobe and hotspot emission associated with these counterjets are expected to be less strongly debeamed. Due to their steep radio spectrum and small projected scales, they can be detectable in LOFAR observations, while having remained undetected in previous higher-frequency and/or lower angular-resolution observations.
The powerful high-redshift (z = 2.218) blazar S5 0836+710 has been observed with the VLA by Cooper et al. (2007) at 1.4 GHz. At this moderately low frequency, the VLA (in A configuration) did not resolve the kilo-parsec-scale structure of the source. Higher-frequency VLA observations (e.g., O'Dea et al. 1988) and observations with MERLIN at 1.6 GHz (Hummel et al. 1992) have shown a single extended and polarized emission feature about 1.5 arcsec south of the jet core without any visible emission bridge between it and the core and without any apparent counterpart on the other side of the core. Perucho et al. (2012a) and Perucho et al. (2012b) suggested that the jet in S5 0836+710 is subject to the development of Kelvin-Helmholtz (KH) instability (Perucho et al. 2012a) and that this instability could be the cause of jet disruption and the generation of a decollimated radio structure at arcsecond scales (Perucho et al. 2012b), explaining the prominent extended feature observed by O'Dea et al. (1988) and Hummel et al. (1992). However, Perucho et al. (2012b) pointed out that the disruption site should be associated with intense dissipation of kinetic energy, which is not observed at any point between the inner jet and the putative relic feature. Another problem in the jet-disruption scenario is related to the one-sided kiloparsec-scale morphology because no corresponding relic or lobe associated with the counterjet can be observed. In this paper we present new results from LOFAR observations that solve these problems. The overall extended arcsecond-scale structure can be interpreted as a classical but strongly projected double-sided source morphology in which the southern feature is a hotspot associated with the approaching jet rather than a disrupted jet relic.
The following sections are structured as follows: In Sect. 2 we give observational parameters of the LOFAR observation of S5 0836+710 and describe the data reduction. The resulting images and derived quantities are presented in Sect. 3. Section 4 presents a discussion of the observational data and implications. Throughout the paper, we use the following cosmological parameters: H₀ = 71 km s⁻¹ Mpc⁻¹, Ω_m = 0.27, and Ω_Λ = 0.73.
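For reference, the distances these parameters imply at z = 2.218 can be sketched with a simple numerical integration (function names and step count are our choices, not the authors'):

```python
import math

# Flat LCDM with the paper's parameters: H0 = 71 km/s/Mpc, Om = 0.27, OL = 0.73.
H0, OM, OL = 71.0, 0.27, 0.73
C_KM_S = 299792.458

def E(z):
    """Dimensionless Hubble parameter H(z)/H0."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance_mpc(z, n=10000):
    """Trapezoidal integration of D_C = (c/H0) * int_0^z dz'/E(z')."""
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        s += 1.0 / E(i * h)
    return (C_KM_S / H0) * s * h

z = 2.218
dc = comoving_distance_mpc(z)
dl = (1.0 + z) * dc                # luminosity distance in Mpc, ~1.79e4 (~17.9 Gpc)
da = dc / (1.0 + z)                # angular-diameter distance in Mpc
kpc_per_arcsec = da * 1e3 * math.pi / (180.0 * 3600.0)   # ~8.4 kpc per arcsec
```

The ~8.4 kpc/arcsec scale means the 1.5 arcsec core-to-southern-component separation corresponds to roughly 12 kpc in projection.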
Observation and data reduction
We observed S5 0836+710 on June 17, 2015, with the LOFAR High Band Antenna (HBA) array. The observation time was 4 hours, covering the full frequency range between 117.5 MHz and 162.6 MHz. 3C 196, which was used as the primary flux-density calibrator, was observed for 10 minutes at the beginning of the observing run. Data were recorded in 8-bit mode in 231 subbands with a bandwidth of 192 kHz each, averaged into 14 frequency bands of 3.12 MHz bandwidth each, and correlated with the COBALT (COrrelator and Beamforming Application platform for the Lofar Telescope) correlator (Broekema et al. 2018). Given the known highly compact structure of S5 0836+710, we used only the international LOFAR stations and analyzed the data using standard methods of Very-Long-Baseline Interferometry (e.g., Moran & Dhawan 1995) with the AIPS (Astronomical Image Processing System; Greisen 2003) package. In this process, the data were averaged over 16 seconds in time so that the field of view contained only the target source. The resulting (u,v)-coverage is shown in Fig. 1 and the measured visibilities as a function of (u,v)-radius are shown in Fig. 2. Images were created using self-calibration techniques with difmap (Shepherd 1997) directly on the target, which was possible because of the bright and compact core emission present in S5 0836+710. Four frequency bands had to be discarded due to insufficient data quality, presumably due to the bandpass shape and radio-frequency interference (RFI). The total flux density picked up in the full image was calculated and used to compute a correction factor by comparison to the total flux density of the flux calibrator, known from a low-resolution image made with the calibrated LOFAR core stations. This factor was applied to the high-resolution model, and the visibilities between the international stations were self-calibrated with this corrected model. This process led to fully calibrated LOFAR VLBI images.
Results
Figure 3 shows a stacked image of the 11 bands (see online material for the images of the individual bands), corresponding to a central frequency of 143 MHz and a bandwidth of 34 MHz.
The general structure is in agreement with previous higher-frequency observations of S5 0836+710 on comparable scales and at comparable angular resolution (see especially Fig. 5 in Hummel et al. 1992 and Fig. 1 in Perucho et al. 2012b). The source shows a compact unresolved core and a resolved emission region between 1 and 2 arcsec to the southwest of the radio core. The core is known to contain a southward-directed compact VLBI jet with an extent of about 200 mas or 1.5 kpc (Perucho et al. 2012a) that shows signs of growing instabilities with distance downstream. These were thought to lead to a full disruption of the jet before it is able to reach arcsecond scales (Perucho et al. 2012b). In this scenario, the southern component was interpreted as a subrelativistic relic of the disrupted jet that continues propagating downstream and interacting with the intergalactic medium. Such features are generally expected to show steep spectral indices with −2 < α < −1 (Pandey-Pommier et al. 2016).
To test this scenario, we created a spectral-index image using the 11 individual frequency bands and fitting a power-law to each individual pixel (see Fig. 4). The southern component shows a moderately steep spectrum with a spectral index of about −1 while the spectral index of the core is flat to slightly inverted. An additional striking feature of the LOFAR spectral-index image is a resolved steep-spectrum halo with a spectral index of about -1 surrounding the core. This halo has not been seen in previous higher-frequency images of S5 0836+710 and is only revealed by the good sensitivity and high angular resolution of LOFAR in the sub-GHz regime.
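The per-pixel fit described above amounts to a log-log linear regression across the band images; a minimal sketch on a synthetic data cube (standing in for the real LOFAR maps):

```python
import numpy as np

def spectral_index_map(freqs_hz, cube):
    """Fit F_nu ~ nu^alpha per pixel: cube has shape (nbands, ny, nx);
    returns the map of best-fit alpha with shape (ny, nx)."""
    x = np.log10(freqs_hz)
    y = np.log10(cube.reshape(len(freqs_hz), -1))
    slope = np.polyfit(x, y, 1)[0]     # least-squares slope, all pixels at once
    return slope.reshape(cube.shape[1:])

# synthetic 11-band cube with a known spectral index of -1
freqs = np.linspace(120e6, 160e6, 11)
cube = (freqs[:, None, None] / 143e6) ** -1.0 * np.ones((11, 4, 4))
alpha = spectral_index_map(freqs, cube)   # recovers ~ -1 everywhere
```

In practice one would also blank pixels below a signal-to-noise threshold and propagate the per-band flux-scale uncertainties, which this sketch omits.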
For comparison, we have produced a second spectral-index map between the LOFAR band at 138 MHz and 1.6 GHz (obtained from the MERLIN observation on March 1, 2008; see Hummel et al. 1992), which is shown in Fig. 5. The core spectral index is affected by source variability between the LOFAR (2015) and MERLIN (2008) observations but shows a roughly flat spectrum. The steep-spectrum halo is marginally visible and its spectral slope is consistent with the LOFAR-only spectral-index image (Fig. 4). The southern component shows a spectral index of about −0.7 between 138 MHz and 1.6 GHz and is therefore also consistent with the LOFAR-only data within the given accuracy.
Both spectral-index images consistently suggest that LOFAR resolves the central emission region into a core-halo structure.
To test this, we model-fitted the LOFAR visibility data in this region with two superimposed Gaussian components in the image plane.
Discussion
In this section, we provide circumstantial evidence which shows that the large-scale morphology of S5 0836+710 can be understood as a classical FR II-like radio galaxy seen at a small inclination angle. In this model, the southern component can be understood as a face-on hotspot, hotspot relic, or hotspot-lobe structure of the approaching jet, and the halo can be associated with emission of the hotspot and/or lobe on the counterjet side. We use the observational parameters to derive jet parameters and constrain the density of the intracluster medium surrounding the radio source S5 0836+710.
Interpretation of the southern emission region as a face-on hotspot
The spectral index of the southern component in S5 0836+710 is only moderately steep and does not reach values expected for a bona-fide radio relic of a disrupted jet as seen in other sources (Pandey-Pommier et al. 2016). The magnetic field in this region is circumferential at the south-eastern edge (O'Dea et al. 1988), as is typical for quasar hotspots (Swarup et al. 1984). The field seems to be aligned with the western region of the large-scale structure. Altogether, this region can be interpreted as an emission region associated with a hotspot plus a lobe, in which the field is aligned along the shock in the west, as observed in other FR II sources (Kharb et al. 2008). We therefore measured the size of the southern component by model fitting a Gaussian component to the LOFAR data in all 11 bands to test whether its extent is consistent with an active FR II hotspot region seen face-on. The full extent of the emission region is about 9 kpc with an average flux density of (1.4 ± 0.3) Jy, translating to an intrinsic luminosity of (5 ± 1) × 10²⁹ W Hz⁻¹. At a luminosity distance of 17.88 Gpc, this is indeed on the scale of typical hotspot diameters in powerful FR II radio galaxies (Jeyakumar & Saikia 2000; Perucho & Martí 2003; Kawakatu & Kino 2006). Alternatively, it is possible to find a model representation in which only the brightest peak of the southern component is modeled with a Gaussian of about 5 kpc in diameter and a flux density of (1.2 ± 0.2) Jy (while residual emission on somewhat larger scales can be represented either by an additional wider Gaussian or by a hybrid model invoking a distribution of CLEAN components). This latter representation would model a physical scenario of a hotspot surrounded by a lobe. Also in this model representation, the size of the high surface-brightness feature is still consistent with typical sizes of hotspots in FR II radio galaxies.
The 'unusual' irregular morphology of the putative hotspot might indeed just be an effect of the high angular resolution and the small inclination angle at which the system is observed. If, as suggested by Perucho et al. (2012b), this southern emission component does represent the relic of the hotspot, after the jet has been transformed into a subrelativistic or mildly relativistic broad flow, then the loss of collimation must have taken place fairly close upstream of the terminal feature, because it obviously has not expanded substantially since then.
Interpretation of the source morphology as an FR II-like radio galaxy at a small viewing angle
The kiloparsec-scale structure of S5 0836+710 is consistent with a double-sided source, reminiscent of a highly projected radio-galaxy image onto which a strongly beamed unresolved core component is superimposed. In this interpretation, the southern diffuse component can be interpreted to be associated with the hotspot region of the approaching jet and the halo-like diffuse component near the core can be interpreted as the counter-hotspot region. Because the distance to the core is larger for the hotspot than it is for the counter-hotspot, the system cannot be fully symmetric. However, at small inclination angles, intrinsically small bends or misalignment angles can be increased to substantially larger apparent offsets in projection. A possible geometry of the system is shown in Fig. 7.
Fig. 7. Model of the source geometry in S5 0836+710 as an FR II radio galaxy observed at a small inclination angle θ. HS denotes the hotspot region closer to the observer, and CHS denotes the counter-hotspot region. The different arm lengths are due to the different light travel times (see Appendix A). The two jets are misaligned by ϕ from a straight jet/counter-jet axis.
Interpreting the southern emission feature as the hotspot region and the halo component near the core as the counter-hotspot region, we calculate the brightness ratio of the two regions to be F_h/F_ch = (1.19 ± 0.11). This brightness ratio can be used to constrain the parameter space for the inclination angle, the misalignment, and the advance speed β_h of the jet head, which we assume to be the same for both hotspots. Advance speeds in FR II radio galaxies are generally assumed to be mildly relativistic, with values of up to 0.1c to 0.5c (O'Dea et al. 2009), depending on the deviation from the minimum-energy conditions. Under a small inclination angle, such speeds can lead to notable differential boosting effects. In this framework, we thus expect

F_h/F_ch = [(1 + β_h cos(θ + ϕ)) / (1 − β_h cos θ)]^(3−α),    (1)

where θ is the inclination angle under which we observe the approaching jet and ϕ is the misalignment angle of the counterjet with respect to the approaching jet (see Fig. 7). With the measured spectral index of α = −0.7, the measured brightness ratio of the two hotspot regions, and a viewing angle of θ = 3.2°, as estimated by Pushkarev et al. (2009), this relation constrains the allowed parameter space as seen in Fig. 8. For a given flux ratio, the resulting head advance speed depends only weakly on the geometry. Considering the uncertainty range of the flux ratio, the advance speed is constrained to the range of 0.010 c to 0.036 c. This is comparable to (albeit on the low end of the distribution of) source advance velocities of distant high-power FR II radio galaxies (O'Dea et al. 2009).
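Numerically, the boosting constraint can be inverted for the advance speed. The relation used below, F_h/F_ch = [(1 + β_h cos(θ + ϕ))/(1 − β_h cos θ)]^(3−α), is our reconstruction of the differential-boosting expression; with the paper's numbers it reproduces the quoted 0.010 c to 0.036 c range.

```python
import math

def advance_speed(flux_ratio, alpha, theta_deg, phi_deg):
    """Invert F_h/F_ch = [(1 + b*cos(theta+phi)) / (1 - b*cos(theta))]**(3-alpha)
    for the hotspot advance speed b (in units of c)."""
    r = flux_ratio ** (1.0 / (3.0 - alpha))
    th = math.radians(theta_deg)
    ph = math.radians(phi_deg)
    return (r - 1.0) / (r * math.cos(th) + math.cos(th + ph))

# alpha = -0.7, theta = 3.2 deg, phi = 2.5 deg, F_h/F_ch = 1.19 +/- 0.11
beta_lo = advance_speed(1.08, -0.7, 3.2, 2.5)   # ~0.010
beta_mid = advance_speed(1.19, -0.7, 3.2, 2.5)  # ~0.024
beta_hi = advance_speed(1.30, -0.7, 3.2, 2.5)   # ~0.036
```

Note how weakly the result depends on ϕ: the denominator is close to 2 cos θ for any small misalignment, which is why the geometry barely matters for a given flux ratio.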
Hotspot advance speeds of active FR II radio galaxies have commonly been assumed to be roughly constant over the lifetime of a source (O'Dea et al. 2009). In that case, the measured advance speed for S5 0836+710 would imply a source age of 2 × 10⁷ years to 8 × 10⁸ years, which exceeds the maximum total source lifetime of such a powerful source by a factor of 2 to 80 (e.g., O'Dea et al. 2009; Perucho et al. 2019). Thus, the hotspot advance speed in S5 0836+710 must have been somewhat higher in the past, in agreement with a scenario in which the originally highly relativistic jet has lost collimation due to the growth of instabilities and has transformed into an only mildly relativistic flow, as suggested by Perucho et al. (2012b).
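A back-of-the-envelope version of this age argument, using an assumed angular scale of 8.37 kpc/arcsec and θ = 3.2° (our choices, consistent with the paper's cosmology and geometry):

```python
import math

KPC_OVER_C_YR = 3261.6   # light-travel time of 1 kpc in years

def source_age_yr(sep_arcsec, kpc_per_arcsec, theta_deg, beta_h):
    """Hotspot travel time over its deprojected core distance at constant beta_h."""
    d_kpc = sep_arcsec * kpc_per_arcsec / math.sin(math.radians(theta_deg))
    return d_kpc * KPC_OVER_C_YR / beta_h

age_fast = source_age_yr(1.5, 8.37, 3.2, 0.036)   # ~2e7 yr
age_slow = source_age_yr(1.5, 8.37, 3.2, 0.010)   # ~7e7 yr
# Both land inside the 2e7-8e8 yr window quoted in the text, i.e. well beyond
# plausible total lifetimes if the advance speed were always this low.
```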
The hotspot region was modeled with a single Gaussian component for each band. Averaging all bands, we measure an apparent opening angle (core to hotspot) of (25 ± 2)°. From that, it can be derived that the inclination angle is unlikely to be much larger than 15°, because that would imply an intrinsic opening angle of 7°. On the other hand, it is highly improbable that the inclination angle is much smaller than about 1.5°, because otherwise the total deprojected source size of > 1 Mpc would be larger than the maximum known sizes of radio sources (Jeyakumar & Saikia 2000). At the preferred inclination angle of θ = 3.2° (Pushkarev et al. 2009), the measured apparent opening angle implies an intrinsic opening angle of about 1°, which is consistent with the conclusions of Hummel et al. (1992).
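The projection of the opening angle can be sketched with the small-angle relation tan(ζ_int) = tan(ζ_app)·sin θ (our approximation, not a formula from the paper):

```python
import math

def intrinsic_opening_deg(zeta_app_deg, theta_deg):
    """Intrinsic opening angle for an apparent (projected) opening angle
    zeta_app_deg of a jet seen at inclination theta_deg."""
    return math.degrees(math.atan(math.tan(math.radians(zeta_app_deg))
                                  * math.sin(math.radians(theta_deg))))

# apparent opening angle (25 +/- 2) deg from the Gaussian model fits
zeta_15 = intrinsic_opening_deg(25.0, 15.0)   # ~6.9 deg (near the quoted 7 deg)
zeta_32 = intrinsic_opening_deg(25.0, 3.2)    # ~1.5 deg (near the quoted ~1 deg)
```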
An independent additional constraint on the inclination and misalignment angles follows from the simple geometric argument that we see the hotspot region at a distance of about 1.5 arcsec from the core, while the counter-hotspot region is located within about 0.5 arcsec of the core (because at that distance the core and the counter-hotspot start to merge into a blended feature at the beam size of our LOFAR image). We thus require a misalignment angle ϕ ≳ 2.5°.

Fig. 8. The ϕ lines show the misalignment needed to explain the measured flux ratio for a given β. β values are represented by the drawn arcs and increase from 0.02339 (leftmost arc) up to 0.02348 (rightmost arc) in increments of 3.4 · 10⁻⁶. ϕ_min is the minimum needed, and ϕ_max is the maximum possible misalignment angle resulting from the geometric argument (see text). The confined green area in between is therefore the possible misalignment range for the counter-hotspot region to be blended with the bright core.
The misalignment ϕ needed to explain both the observed morphology and the brightness ratio of the hotspot regions is thus in the range 2.5° to 5°. Within the range of typical inclinations of blazars towards Earth, this leaves a relatively large parameter space (see Fig. 8) that is fully consistent with expectations for the special case of S5 0836+710. Such values are indeed common among powerful FR II radio galaxies. For example, misalignment angles in 3C sources are known to range up to values of about 12° or more (Leahy & Williams 1984). We thus consider the interpretation of the S5 0836+710 large-scale morphology as an FR II-like radio galaxy at a small viewing angle to be realistic.
Derivation of the jet parameters
The total jet power for a relativistic jet is parametrized as (see, e.g., Perucho et al. 2017)

L_j = (Γ_j² ρ_j h_j + B_φ²/4π) v_j A_j,    (2)

where h_j = c² + γ_j P_j/((γ_j − 1) ρ_j) is the jet specific enthalpy, Γ_j is the jet Lorentz factor, ρ_j is the jet rest-mass density, v_j is the jet velocity, c is the speed of light, B_φ is the toroidal field in the observer's frame, γ_j is the ratio of specific heats of the jet gas, and A_j is the jet cross-section. The first term in Eq. 2 includes the kinetic, internal, and rest-mass energy contributions, while the second term stands for the magnetic energy of the jet.
We can assume (see Appendix B) that the jet is kinetically dominated and in the cold regime (i.e., its magnetosonic Mach number is high). Given also that the advance speed as measured by LOFAR is so slow, the velocity of the bulk plasma flowing into the hotspot can be assumed to be at most mildly relativistic (see also Appendix C for the relativistic derivation). Under these conditions, the magnetic and pressure terms can be neglected and Eq. 2 simplifies to

L_j ≃ Γ_j² ρ_j c² v_j A_j.    (3)

Across a strong shock, the hotspot pressure is (e.g., Landau & Lifshitz 1987)

P_h = 2/(γ_h + 1) · L_j v_j,h / (Γ_j,h² c² π R_j,h²).    (4)

In this equation, we need to define the jet radius at the hotspot, R_j,h, which can be approximated as the hotspot radius. Because we do not know whether the southern radio structure includes the hotspot and part of the lobe, or is the hotspot alone, we consider both half of the whole region size (4.5 kpc) and half the size of the fitted component, which represents the brightest region within the hypothetical lobe (2.5 kpc), as the jet radius. Although the polarization seems to favour the latter interpretation (see the previous section), we study both cases here. Furthermore, we need an estimate for v_j at the hotspot, v_j,h. As we show in the following paragraphs, any reasonable input value is sufficient because we define an iterative method to derive its value at convergence. Once the hotspot pressure is obtained, we can use equipartition between the non-thermal particles and the magnetic field, as reported for FR II hotspots (see, e.g., Hardcastle & Worrall 2000), to obtain a value of the magnetic field at the interaction site.
The value of the magnetic field prior to the reverse shock, B_φ,j,h, can be constrained by assuming conservation of the magnetic flux from the 1.6 GHz jet to the interaction site.

Fig. 9. Minimum-energy magnetic field strengths as a function of the minimum electron Lorentz factor, shown for β_h = 0.01 and β_h = 0.036; the minimum energy assumption (see Pyrzas et al. 2015) yields magnetic field strengths in the range 0.7 mG ≤ B ≤ 1.8 mG within the southern hotspot.

Applying the magnetic field strength to the MHD jump conditions at the reverse shock giving rise to the hotspot allows us to derive a new estimate for v_j,h. Setting a convergence criterion for this parameter at a precision of 10^−3, we can find the relevant parameters of the problem. We have applied this method to four different sets of hotspot velocity and radius, namely, the possible combinations of v_h = 0.01−0.036 c and R_j,h = 2.5−4.5 kpc. Table 1 shows the resulting values. The ranges given for the parameters correspond to the values derived for r_h = 4.5 and 2.5 kpc, with the smaller values of pressure, magnetic field and density corresponding to the wider hotspot.
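The iterative scheme can be sketched as a simple fixed-point loop. The sketch below assumes only a cold, kinetically dominated jet (Q_j ≈ Γ_j^2 ρ_j c^2 v_j A_j) and a strong-shock hotspot pressure P_h ∝ ρ_j (v_j,h − v_h)^2; the input values for the jet power and hotspot pressure are purely illustrative, and the equipartition and flux-conservation steps of the full method are omitted.

```python
import math

C = 2.998e10    # speed of light [cm/s]
KPC = 3.086e21  # kiloparsec [cm]

def iterate_vjh(Q_j, P_h, v_h, R_jh, gamma_ad=5.0 / 3.0, tol=1e-3):
    """Fixed-point iteration for the jet velocity at the hotspot, v_j,h.

    Combines rho_j = Q_j / (Gamma^2 c^2 v A) (cold, kinetically dominated
    jet) with the strong-shock pressure P_h = 2/(gamma+1) rho_j (v - v_h)^2,
    updating v until successive estimates agree to within tol * c.
    """
    A = math.pi * R_jh**2
    v = 0.3 * C  # any reasonable starting guess converges
    for _ in range(1000):
        gamma_l = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        rho = Q_j / (gamma_l**2 * C**2 * v * A)   # density implied by jet power
        v_new = v_h + math.sqrt(P_h * (gamma_ad + 1.0) / (2.0 * rho))
        v_new = min(v_new, 0.99 * C)              # keep the estimate subluminal
        if abs(v_new - v) / C < tol:
            return v_new
        v = v_new
    raise RuntimeError("no convergence")

# Illustrative inputs (assumed, not the paper's values):
# Q_j = 1.2e48 erg/s, P_h = 5e-8 dyn/cm^2, v_h = 0.036 c, R_j,h = 2.5 kpc
v = iterate_vjh(1.2e48, 5e-8, 0.036 * C, 2.5 * KPC)
print(f"converged v_j,h ≈ {v / C:.2f} c")
```

With these assumed inputs the loop settles at a mildly relativistic value (a few tenths of c), mirroring the behaviour described in the text: the converged velocity does not depend on the starting guess.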
We find that the jet velocity at the interaction site is substantially smaller than that of the VLBI jet. This can be explained in terms of kinetic energy dissipation by the growth of the KH modes and/or integrated entrainment along the jet (e.g., Perucho et al. 2012a,b). The extreme value of β_j,h = 0.54, derived for β_h = 0.036, provides an a posteriori justification of our assumption of a mildly relativistic flow, because possible relativistic corrections are limited by the Lorentz factor Γ_j,h ≲ 1.2.
Our spectral analysis provides us with values for the magnetic field, from basic synchrotron theory, that are displayed in Fig. 9 in terms of the minimum electron Lorentz factor, by using Eq. (6) in Pyrzas et al. (2015). The equipartition magnetic field derived for the hotspot region results in a value of γ_min ≤ 100, which represents a plausible number (Meisenheimer et al. 1989; Meisenheimer et al. 1997).
If we compare the parameters in Table 1 to those obtained by Meisenheimer et al. (1989) for different classical FR II hotspots using spectral analysis, we see that our hotspot region values for pressure and magnetic field are around or slightly above their maximum values (B_h ∼ 0.1−1 mG, P_h ∼ (0.1−1) × 10^−8 dyn cm^−2 in their case). Taking into account that S5 0836+710 is a powerful jet probably interacting with a dense ICM, we conclude that the parameters we derive are in agreement with the typical ones that Meisenheimer et al. (1989) obtained using a different method.
From the derived parameters, we can go a step further and estimate the jet density prior to the reverse shock by using Eq. (B.1). The results are also given in Table 1. The jet number density at the hotspot region lies in the range n_j,h = 0.1−4.0 cm^−3.

A. Kappes et al.: LOFAR Measures the Hotspot Advance Speed of S5 0836+710

Table 1. Jet parameters at the hotspot. The intervals in the parameters correspond to the values derived for r_h = 4.5 and 2.5 kpc.
In the case of a leptonic jet, this implies a total particle number flux of N_j,e = Γ_j,h v_j,h n_j,h π R_j,h^2 ≈ 1.2−5.5 × 10^54 pairs/s at the hotspot. This flux falls to 0.6−3.0 × 10^51 s^−1 in the case of a proton-dominated jet at these scales. A pair jet can, nevertheless, be discarded on the basis of energetic arguments: taking into account that the jet is relativistic at VLBI scales, the energy flux in the form of rest-mass energy must necessarily have changed. As a conclusion, pollution by protons is required to happen along the jet, and the jet is likely to be proton dominated on large scales.
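As an order-of-magnitude check, the quoted pair flux follows directly from N_j,e = Γ_j,h v_j,h n_j,h π R_j,h^2. The sketch below uses illustrative values within the ranges given in the text; the Lorentz factor and velocity are assumptions chosen to be mildly relativistic.

```python
import math

C = 2.998e10      # speed of light [cm/s]
KPC = 3.086e21    # kiloparsec [cm]

# Particle number flux N = Gamma * v * n * pi * R^2 at the hotspot,
# using illustrative values within the quoted ranges
# (n_j,h ~ 0.1-4 cm^-3, R_j,h = 2.5 kpc, mildly relativistic flow).
gamma_jh = 1.1          # assumed Lorentz factor at the hotspot
v_jh = 0.5 * C          # assumed jet velocity at the hotspot [cm/s]
n_jh = 1.0              # assumed number density [cm^-3]
R_jh = 2.5 * KPC        # jet radius at the hotspot [cm]

N = gamma_jh * v_jh * n_jh * math.pi * R_jh**2
print(f"N ≈ {N:.1e} particles/s")
```

The result is of order 10^54 s^−1, within the 1.2−5.5 × 10^54 pairs/s range quoted for a leptonic jet; a proton jet carries ∼m_e/m_p times fewer particles for the same mass density, consistent with the ∼10^51 s^−1 figure.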
Implications for the intracluster medium
From ram pressure confinement, we obtain a minimum value of ρ_a = P_h/v_h^2 ≈ 1.5 × 10^−26 g cm^−3, and a maximum of 1.5 × 10^−24 g cm^−3 for the possible ranges of R_j and v_h. The maximum derived (which corresponds to the smallest hotspot advance speed) would imply proton number densities of ∼ 1 cm^−3, of the order of interstellar medium values, and is therefore unrealistically large for the intracluster medium at 240 kpc from the active nucleus. Increasing the hotspot advance speed up to 0.036 c can yield values as small as ∼ 0.01 cm^−3. The dependence of the derived ambient density on the observational parameters thus favours hotspot advance speeds in the upper range of the interval given by the brightness asymmetry. This result is, however, still one to two orders of magnitude above the values found by O'Dea et al. (2009). In that work, no clear trend of the density of the intracluster medium with redshift is found out to values of z ≈ 1.8. S5 0836+710 is located at z = 2.22, so that we are in principle probing the density of the intracluster medium at a somewhat earlier evolutionary stage of the expanding universe. However, the study of a single source does not allow us to draw any conclusions on possible systematic cosmological effects. S5 0836+710 might rather lie within a particularly overdense cluster, such as that of the local FR II radio galaxy Cygnus A. Nevertheless, our method can in principle be applied to large samples of high-power blazars and has the potential to reach out to even higher redshifts.
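The ram-pressure estimate ρ_a = P_h/v_h^2 and the corresponding proton number densities can be reproduced with a short sketch; the hotspot pressures used here are assumed round numbers chosen to bracket the quoted density range, not values taken from Table 1.

```python
M_P = 1.67e-24    # proton mass [g]
C = 2.998e10      # speed of light [cm/s]

def ambient_density(P_h, v_h):
    """Ambient density from ram-pressure confinement, rho_a = P_h / v_h^2."""
    return P_h / v_h**2

# Illustrative (P_h, v_h) pairs bracketing the ranges discussed in the text;
# the hotspot pressures are assumed values, not taken from Table 1.
rho_max = ambient_density(1.4e-7, 0.010 * C)   # slowest advance speed
rho_min = ambient_density(1.7e-8, 0.036 * C)   # fastest advance speed

print(f"n_max ≈ {rho_max / M_P:.2f} cm^-3")    # ISM-like, unrealistic for the ICM
print(f"n_min ≈ {rho_min / M_P:.3f} cm^-3")
```

Dividing by the proton mass recovers the two limiting number densities discussed in the text: roughly 1 cm^−3 for the slow advance speed and roughly 0.01 cm^−3 for the fast one.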
Conclusion
The LOFAR telescope provides unprecedented sensitivity and angular resolution in the 100 MHz regime. For compact sources with structure only on angular scales of arcseconds and smaller, the international LOFAR stations can be used effectively as a VLBI array. Further improvements in image fidelity can be achieved by the inclusion of Dutch core-array stations (which can be obtained by improved calibration techniques) and, in future observations, by the new international stations in Poland, Ireland and Italy, yielding a maximum baseline of ∼ 1900 km. For the observation of blazars, the unique LOFAR capabilities are crucial because of the small angular scales involved and the beamed core emission, which dominates over the extended structure at higher frequencies. Future LOFAR studies of blazar samples will be able to address unsolved questions about the unification of blazars and radio galaxies, such as the occurrence of FR II-typical morphological features in BL Lac objects (Cooper et al. 2007; Kharb et al. 2010).
In this paper, we have demonstrated that blazar observations with LOFAR can also be used to probe the intracluster medium out to cosmological distances. Our results suggest that the density of the intergalactic medium around the distant (z = 2.22) blazar S5 0836+710 might be substantially higher than the values found in less distant FR II radio galaxies. However, no generalized statement on a systematic redshift dependence can be derived from a single-source study such as the one presented here, because systematic uncertainties and source-specific peculiarities have to be considered. Our method can be generalized and applied to larger numbers of suitable blazars to yield statistically relevant samples of the density of the intracluster medium as a function of redshift, which is independent of and complementary to classical observational methods applied to radio galaxies (e.g., O'Dea et al. 2009). Because of their extreme power and beaming, blazars can be found at higher redshifts than radio galaxies, so the method might eventually prove particularly important for studies of the young universe. The weakly beamed components of interest (hotspots, lobes), however, will be very challenging to detect at redshifts much larger than z = 2 and might have to await the advent of the Square Kilometre Array (SKA).
Learning to Learn in Tropical Forests: Training Field Teams in Adaptive Collaborative Management, Monitoring and Gender
SUMMARY From 2011–2015, the Center for International Forestry Research (CIFOR) trained field teams in Nicaragua in Adaptive Collaborative Management (ACM) methods. ACM is a social learning-based approach to help forest communities manage their natural resources in a more equitable and sustainable way and respond to change. This paper presents the lessons learned from the training and field work. It argues that understanding and building social learning processes among the ACM team members and facilitators are crucial components of the ACM methodology and necessary in order to recognize and address the complex nature of socio-ecological relationships. In particular, promoting women's participation in forest decision-making in their own rural communities requires not only a consideration of gender relations but also of the gender perspectives of each member of the field team.
INTRODUCTION

This paper presents the lessons learned from a) training field teams how to implement adaptive collaborative management (ACM) and b) their field work in indigenous communities in Nicaragua.

It presents evidence that building social learning processes among the ACM team members and facilitators is necessary in order to recognize and address the complex nature of socio-ecological relationships, and that social learning is crucial to efforts to improve understanding about gender in field teams working in forest communities. Furthermore, promoting women's participation in forest decision-making in their rural communities requires not only a consideration of gender relations but also of the gender perspectives of each member of the field team and how they interact with others' perspectives in forest communities.

Learning has been defined as a process of gaining understanding about the world (Fazey et al. 2005). A person's understanding of the world is a product of that person's relationship with the world; new experiences change the way a person acts and thinks (Fazey et al. 2005). The complex nature of human-forest relationships requires an explicit recognition of the uncertainty and unpredictability of outcomes, which requires frequent adjustment and adaptation as realities are revealed (Maarleveld and Dangbégnon 1999) in a fluctuating space where complexity, uncertainty and non-equilibrium dynamics are the reality (Leach et al. 2007). Taking a reductionist approach, breaking problems down into small parts, can distort the complex interactions of systems (Maarleveld and Dangbégnon 1999). Simplified interpretations can lead to "superstitious learning", where cause and effect are erroneously connected (Fazey et al. 2005). This is why understanding complex issues, such as gender, requires repeated, conscious efforts to observe and reflect upon what has been observed; only then can perspectives shift and deeper connections be revealed.

Learning about gender has its own complexities, because it involves understanding the set of socially defined constructs (behaviours, tasks, responsibilities and relationships) that define interactions between men and women (Manfre and Rubin 2012), while explicitly acknowledging that the observer brings his or her own perceptions about gender to the process. While there has been research on including gender perspectives in research programs (Colfer 2013a, Mai et al. 2011, Manfre and Rubin 2012, Mutimukuru et al. 2006), there has been little discussion of how learning about gender occurs in research teams and how to create the conditions to promote that learning.

Just as social learning lies at the core of ACM (Colfer 2005a, Lee 1993, Maarleveld and Dangbégnon 1999), training must be approached as a social learning process, with experimentation and reflectivity at its core. By documenting the perspectives of the field team towards communities, gender and learning over the course of five years, it was possible to explore how adaptive collaborative management approaches can create spaces for learning about gender and catalyse shifts in perceptions about gender. This article describes experiences in how groups learn about gender relations, and in doing so, it argues that social learning and adaptive collaborative approaches provide a framework and method to address the challenges of the complex environments of forest communities, permitting team members to examine their own perspectives and biases in order to better understand gender dynamics on the ground.

Social learning in theory

In the late 1990s, CIFOR scientists and others looked for participatory methods that could explicitly acknowledge how learning occurs in uncertainty (Lee 1993), and how marginalized groups can be included in the process (Colfer 2005a). Adaptive collaborative management took those ideas as well as other concepts and integrated them with participatory action research (PAR) approaches (German et al. 2012). Social learning is a core part of adaptive collaborative approaches (Berkes 2009). In social learning, reality is perceived as constructivist, in the sense that people make meaning and knowledge from their own experiences and interactions with others' experiences, rather than from a positivist perspective, where knowledge is derived from observation of the natural world (Rist et al. 2007). Approaching forest management as experimentation, by encouraging social learning, is one approach to addressing change and complexity. In fact, social learning can be considered necessary in complex environments such as the human-forest interface, where uncertainty requires a concerted effort to build trust and create meaningful human interactions (Armitage et al. 2008), particularly on deeply personal issues such as gender relations.

The core mechanism of social learning is a process of "iterative reflection" that occurs when experiences, ideas and environments are shared with others (Keen et al. 2005). This reflexivity means reflecting on the learning, which then generates new learning (Keen et al. 2005). It is fundamental to the social learning process, and it is often characterized as occurring in cycles, such as Kolb's Learning Cycle (Kolb 2014), or loops (Colfer 2005a, Kolb et al. 1995), where conscious phases of reflection are interspersed among the information collection processes (Colfer 2005a). When feedback from the process causes actors to reflect on and change their initial assumptions, this is referred to as "double loop learning" (Maarleveld and Dangbégnon 1999). When there is reflection on the process itself and on the conditions that structure interactions, this is referred to as triple loop learning, i.e., learning to learn (Maarleveld and Dangbégnon 1999). Peschl (2007) describes triple loop learning as a shift in the frame of reference, bringing about change in the fundamental perspectives of the observer, or a "reframing" where the observer steps out of his or her experience and attempts to "look at a situation as whole in a reflective act" (Peschl 2007: 139). Reed et al. (2010) argue that social learning as a process must demonstrate that (1) there has been a change in understanding on the part of individuals; (2) this change extends beyond the individual to a wider community or unit; and (3) it occurs through social interactions via a social network. Vertical integration between at least two power levels must be involved (Colfer 2005a), such as between community members and local experts. For example, Rist et al. (2007) argue that this interaction between experts and local people generates joint knowledge production that is crucial to improving the capacity of rural communities to define their own interests, acquire new knowledge and mobilize resources that can help them catalyse changes that are in line with their own vision and needs.
ACM is a collective problem-solving and natural resource management approach that focuses on learning from mistakes and successes to systematically adapt to change and improve management outcomes (Colfer 2005a). Specifically, ACM uses social learning, a process through which individuals work with others to observe, evaluate and decide upon actions together so that decisions about natural resource management can be more adaptive and collaborative. Social learning in ACM can be thought of as an iterative learning cycle that occurs through a process of planning, taking action, monitoring and reflecting on the process (Colfer 2005a). ACM seeks to create learning-oriented opportunities in uncertain conditions in order to adapt to change. In fact, adaptation has been characterized as learning in the face of change and uncertainty (Ojha et al. 2013). ACM approaches see social learning as the starting point. Social learning in ACM occurs when various stakeholders reflect meaningfully and systematically together upon progress and results, often using information collected from monitoring. Transitioning to ACM requires applying diverse learning strategies that specifically address social-ecological feedback through experimental or experiential learning, and institutional arrangements must explicitly embrace reflection and innovation as part of the process (Armitage et al. 2008). Participatory monitoring, where groups collect and reflect upon information together, is one of the central learning strategies in ACM (Colfer 2005a, Cronkleton 2005, Mutimukuru et al. 2006). Knowledge is shared and valued among diverse groups through conscious, facilitated efforts to encourage groups to learn collectively to understand the impacts of their actions (Colfer 2013b) and is specifically oriented to "muddle" through complex systems and generate innovative solutions and thinking on aspects of resource governance and management (Wollenberg et al. 2004, Ojha et al. 2013).
Nonetheless, "the learning way is easier said than done" (Ojha et al. 2013: 7), as researchers acknowledge that learning takes time, capacity and resources to build trust and relationships. However, there are encouraging examples of the successes of social learning in creating diverse outcomes. For example, in Canada, the presence of government biologists in native communities was the key factor in trust building in the management of narwhals and led to social learning and collective action (Berkes 2009). In Nepal, social learning cycles resulted in empowerment (Hiyama and Keen 2004, Dangol 2005), and in rural India, Bolivia and Mali social learning improved communication among diverse power structures (Rist et al. 2007).
As seen above, however, most research on social learning focuses on the target communities rather than on the learning of the field teams themselves. Working at multiple scales and with diverse stakeholders, knowledge systems and perspectives is part of the landscape of forest management work. This landscape is frequently no less complex within the field teams responsible for planning and implementing projects. Field teams involved in research and community development projects can reflect a diversity of perspectives and groups: they are often heterogeneous, composed of people with varying skills, experiences and disciplines, as well as cultural differences that may include people from various ethnic groups and even nationalities. Foresters are trained to think differently from sociologists; a city-based upbringing is distinct from a rural one. It is almost inevitable that pecking orders exist. The same challenges that ACM projects encounter in communities often also exist within field teams: diverse knowledge systems, gender differences, hierarchies, conflict and power relations. Conscious efforts to learn together through reflection, i.e. social learning, are necessary and effective in order to transform this diversity into a team's strength. It can be difficult for an outsider to identify when his or her ideas or gender biases are "foreign" to the community. Furthermore, it can be a challenge for someone from an urban setting to put aside her sense that she is "above" immersing herself in the day-to-day activities of the community. Creating learning spaces may require special attention in a multicultural context, and participatory projects in particular need to be self-reflective about the development agendas, social values and dominant norms that are inevitably present (Hiyama and Keen 2004).
There are several studies that address social learning in research teams. Banjade (2013) discusses a learning and collaborative management initiative that helped to "undo" the traditional forestry mindset (Banjade 2013: 227), including the challenges of communicating concepts with local communities, and the tensions within the research team. Colfer (2013b) discusses the challenges of developing ACM research projects in a context where reductionist (hypothesis-testing) research was privileged and the challenges of training field teams in participatory action research. Colfer et al. (2011) discuss the challenges of training traditional research teams in participatory action research methods. Several publications are oriented towards training facilitators in ACM-related methods (Evans et al. 2014, Nemarundwe et al. 2003, Wollenberg et al. 2000).
Less attention in the literature has been paid to learning how to do ACM, to the learning that occurs as a part of the methodological implementation of ACM (including learning how to adapt ACM to improve the participation of women), or to social learning about gender relations in local governance processes. Gender attitudes can be frustratingly "sticky", in the sense that underlying frames of reference can be difficult to shift. Specifically, this paper seeks to understand what kind of learning is involved in the shifting of frames of reference, or, in other words, how attitudes towards gender can be changed.
This paper argues that navigating the complexity of the socio-ecological space requires engagement in multi-loop learning, where research teams reflect on their methodologies and consciously adapt methods and activities in concert with the learning and knowledge creation generated in their field work. Creating reflexive, adaptive research requires a flexibility and agility to contextualize methodologies that may make many researchers uncomfortable. It requires an internalization of constructivist knowledge generation within the scientific process and an acknowledgement that methods may need to be shifted "mid-stream". Adaptation and change can only occur as a result of evolving attitudes and understanding among researchers.

Research context

From 2011 to 2015, the Center for International Forestry Research (CIFOR) and Nitlapan Institute of Research and Development of the Central American University of Nicaragua implemented a participatory research project, financed primarily by the Austrian Development Agency, with the overall goal of promoting women's participation in community forestry-related decisions in indigenous communities (Mwangi and Larson 2009). The study site is the forested Northern Caribbean Autonomous Region (RACCN for its initials in Spanish) of Nicaragua, an area of social and ecological flux and complexity. In the last census, indigenous Miskitus were the largest group (57%), indigenous Mayangnas represented 4% of the population, and mestizos comprised 36% (INIDE 2005). The mestizo presence has increased steadily over the decade due to migration from other regions of Nicaragua; population growth was double the national rate from 1995 to 2005 (Larson and Mendoza-Lewis 2009). This migration has put pressure on indigenous lands, prompting retaliation, conflict, and complicit illegal land sales. Political conflicts have spread throughout the region, at all levels, from the community to the regional government. Over the course of the project, some community governments split into two rival factions, and incidents of violence have accompanied the conflict.

In 2001 the Nicaraguan Map of Extreme Poverty revealed that this region is the poorest in the country, with close to 95% of the population in extreme poverty (INIDE 2001). While at the margins in many respects, the region has nonetheless often been a focal point for outside influence and change, e.g. the establishment of trade networks with the English in the 18th century, timber and mining enclaves in the 20th century, and the Sandinista-Contra civil war in the 1980s. Today, life is changing at an ever more rapid pace in the indigenous communities, in positive and negative ways. Drug trafficking routes, which pass through the region, have widely affected even the smallest communities, bringing addiction and violence. While communities have traditionally relied on subsistence agriculture for their livelihoods, many young people are migrating away for employment; some communities have seen so many able-bodied men leave to work in the region's gold mines or abroad that the majority of households are led by single mothers (Muller, personal communication, 2014). The area is also vulnerable to natural disasters: in 2007 Hurricane Felix destroyed the crops of 25,000 families and affected wide swaths of the forest, resulting in the destruction of an estimated 562,000 hectares of tree cover (FAO 2007).

The project selected ACM as the primary participatory field method, as ACM has demonstrated the potential to create new spaces for women to participate in other forest community contexts (Colfer 2005b, Kusumanto 2007). While ACM was not specifically chosen because of its explicit acknowledgement of change and uncertainty, the adaptive nature of the methodology nonetheless became an integral asset in such a complex environment. Since the project team had little experience with ACM, an ACM consultant (the lead author, Evans) was hired to provide training and follow-up to the field team in ACM concepts such as social learning, participation and action research, and in methods including future scenarios, participatory monitoring and governance monitoring. The trainer was from the United States and had experience with ACM in other Latin American countries, but no experience in Nicaragua. Hence, learning by both the trainer and the field team was an essential part of the process. While initially the trainer was focused on preparing the field team in ACM, it became clear that the training process was creating a rich space for experimentation about social learning, and thus observations of the process and discussions and reflections by the field team were documented and served as the basis for this article.

The field team consisted of indigenous professionals of the same ethnicity as the community members: Miskitu and Mayangna. One team member had grown up in an indigenous community and the other two had family ties to communities. They had diverse professional backgrounds (forestry, sociology and education) with connections and work experience with local partners. Nine communities were selected for the project; due to logistical constraints, six communities continued into a second phase.

The ACM training included workshops, remote mentoring and field mentoring. The purpose of the first ACM training workshop in 2011 was to introduce local researchers and local community partners to ACM and train them in how to initiate ACM-based activities related to the project objectives in the nine communities selected to participate in the project. The field team used scenarios planning to help communities identify ACM-based activities to be carried out in each community. The ACM activities included planting trees, starting community gardens, organizing a carpentry workshop and strengthening community governance. Additional NGO, state and donor agency partners participated in part of the training to learn about ACM. The workshop provided learning opportunities with hands-on activities so that participants could implement ACM-based projects immediately after completing the workshop.

Monitoring was selected as a wedge to insert participants directly into the central process of ACM and social learning (Guijt 2007). Short monitoring exercises were held throughout the workshop, and the second day focused on a hands-on monitoring activity.
By early 2014, the team had adapted their methodologies to be able to explore the interactions of community members outside of the structures of workshops. They added multiple exercises in participant observation as well as interviews. They found that women were active in various monitoring activities, such as tracking attendance at community meetings or recording timber harvest volumes. And they identified a way to address the governance concerns that many women and men had raised about their communities -by developing a governance monitoring tool through participatory workshops. Over the course of a year, the team worked with community members to identify and monitor aspects of good leadership, forest management and community governance. As a group, community members developed standards by which to monitor whether community leaders were meeting their expectations. For instance, one aspect of a good leader is that every three months she or he reported to the community assembly the results of their work (good and bad) with regard to the activities planned. These aspects of governance were generally easy to monitor; because they required simple observations, these evaluations could be done in the monitoring group meeting, which was convened every three months. By reviewing the aspects as a group, community members regularly evaluated progress in their communities. The complete list of indicators that the community monitored as well as a description of how the community members created the governance monitoring tool can be found in Evans et al. (2016).
Furthermore, to date, the team had had no formal gender training, and they were navigating complicated gender issues. In order to strengthen the team's ability to understand and address gender issues, they participated in gender trainings in January 2014 and then accompanied specially trained gender workshop facilitators into the communities for a series of workshops with community members. Two of the men participated in a masculinity workshop. The team members also reorganized their community visits so that they could go together and facilitate workshops as a team.
Social learning in the field
This section presents observations on the training and learning process and the reflections of the team members. The findings were processed by aggregating observations and team reflections into thematic groups. Three major themes emerged: 1) learning to learn, 2) learning about monitoring, specifically gender monitoring, and 3) learning about gender and improving the participation of women.
Learning to learn
At the initial stages, because the methodology and its approach to work with communities were new to the field team, the team was sceptical: the goals seemed abstract (learning, adaptation) and the methodology seemed open. The culminating "test" of the workshop was a full-day activity where the participants became facilitators, leading a mini-ACM workshop with territorial leaders who were enrolled in a leadership program at the local university.
Following the initial workshop in 2011, team mentoring included an effort to provide follow-up at long distance through a series of extended Skype calls, to review progress and the methodology, plan activities for field visits, brainstorm on ideas and techniques appropriate to specific communities, as well as review reports and respond to questions via email. This activity presented a range of challenges. New field team members, who had not participated in the workshop, joined the team. Furthermore, the efficacy of remote training was limited, and conversations about implementing a complex method fell far short of hands-on learning. The ability to work together and collaborate was missing. Therefore, the emphasis shifted to field mentoring. From late 2011 to 2013, a series of five field mentoring experiences provided opportunities to join the team "in action", share on-the-spot feedback within the team as well as take time for extended reflection. The team worked together during these visits, which presented new opportunities to learn from each other, since they had typically facilitated processes separately and individually in the communities. The result was a better context for the work and understanding of challenges. Activities and reflections were documented throughout the process.
During the early fieldwork, the team demonstrated some reticence to start participatory monitoring activities in the field. In spite of various distance mentoring sessions, they had still not begun this phase of work by mid-2012. They were concerned because of a lack of experience, but also because of doubts about the potential efficacy and relevance of monitoring (team reflections, 2016). These problems have been observed in other contexts (Colfer et al. 2010). The training shifted to an approach of learning together with community members about how to implement monitoring. Starting in October 2012, the team and trainer experimented with hands-on practice with a range of monitoring activities in the forest and in communities, and the team gained confidence as they learned and practiced monitoring with the community members. The team applied participatory monitoring approaches such as simple data collection with community members, e.g. measuring the height and diameter of tree seedlings that were planted in the ACM activities, and then reflecting on the changes seen. In this case, community members wanted to understand the growth and survival rates of the seedlings. They found that several of the seedlings had disappeared and most had grown very little. They reflected on the conditions that might have led to these results, such as sunlight levels, encroachment by other species, and human disturbance. They then discussed what types of actions could be taken to improve the survival and growth of the seedlings. The point was to demonstrate that monitoring does not have to be complicated, and that it can serve as a starting point for generating reflection and learning.
A second ACM workshop in March 2013 was held over a two-day period with regional partners (government and NGOs), focusing on intensive information-sharing about ACM and learning. After that, the team facilitated the development of new monitoring activities, bringing about greater enthusiasm for participation from women and creating spaces for men and women to share information and appreciate each other's knowledge. "You do not need to be an expert in ACM to do it. You learn as you go," said one team member.
After having engaged with the communities in a full cycle of ACM (planning, implementing, monitoring, reflection), the team evolved a clearer sense of the potential of ACM to improve management outcomes at the community level -and of the importance of monitoring to the inclusion of women. As a group, the team commented that their reflections helped them learn how to learn: "The mutual collaboration of different visions helps to develop confidence among us as a team. The discussions helped us to learn better and gave us the confidence to practice the method in our group work…That is to say, I learned that the more I discussed a problem or needs with people, it generated concerns or questions to continue asking. By asking the question 'why?' unconsciously you learned and understood certain problems, including a vision and certain possible alternatives to plan and work with the people to suggest possible alternatives to improve" (team discussions, 2014).
In the fourth year of the project, after observing that women participate more actively in monitoring than in meetings and identifying that weak governance was a critical obstacle to the participation of women, the team decided to work with the community members in the development of a governance monitoring tool. The governance monitoring activities proved to the team that they could improvise and experiment with the methodologies: "Even though in the beginning we didn't know if [the governance monitoring activities] would work, we found that it helped people reflect, analyse and dialogue about their problems and weaknesses about governance processes. In a certain manner people grew conscious of the things that happened or their situations in the community and how to work to find a solution to the problems that they faced." (team discussions, December 2015).
Learning about monitoring
As mentioned previously, the team had been hesitant to start participatory monitoring activities, but hands-on practice led to increased confidence. The team members realized that monitoring did not have to be complicated; it could be interesting and encourage participation, particularly of women. It did, however, require flexibility and creativity. They implemented monitoring activities such as participatory mapping with students in one community and measuring timber volumes in another. The team reflected on the monitoring experiences and generated the following lessons learned.
First, the elements or aspects to monitor must come from the community members, and the participants should develop the monitoring instrument. For instance, in community S (communities are identified by randomly assigned initials to preserve anonymity), where the community had initiated a reforestation project, the participants brainstormed ideas about what they wanted to monitor about the newly planted trees and, based on this, developed a monitoring instrument in their notebooks.

At the outset of the project, the team members were comfortable in a workshop meeting format, where roles in the "theatre" of the workshop are predictable. However, they engaged with community members very little outside of the meetings. They were resistant to planning field activities or engaging in participant observation. In the case of at least one of the team members, participating in the daily activities of the community members, such as household chores or helping out with farming, pulled him out of his comfort zone, and it also challenged his status as an educated city person. Thus, when urged to participate in daily activities, he was hesitant. Furthermore, the team members had not yet accepted that making mistakes was part of the process, and that they would not be penalized if activities did not turn out as planned. One team member reflected: "At the beginning we were resistant. Now we know that not all experiences have to come out as successes. We recapture those experiences and learn from them." The team members gradually overcame their hesitancy to experiment and set their own learning cycles into motion. They worked together with community members to experiment with useful, practical, participatory monitoring instruments that generated new knowledge and learning opportunities. The team identified their own adaptive behaviours and learning: "We sensed that we are adapting ourselves…we now go to a workshop with more confidence, with more ownership, and we are more collaborative.
ACM not only impacts community members, but us as well. We are part of ACM. All of us [field team and community member] are part of the learning process, without distinction".
The path was not a smooth one. The team's reluctance to implement participatory monitoring demonstrated the challenges and obstacles of "learning to learn." In spite of frequent discussions about monitoring via Skype and email from late 2011 through 2012, the team had been reluctant to implement monitoring activities, in part from a lack of experience and confidence in the method. It was also observed that the team members struggled with navigating their indigenous identity, and with a reluctance, or lack of comfort, about imposing what they identified as "outside ideas" on their own culture. The male members in particular were uncomfortable talking with women about gender roles. This added a layer of complexity when introducing new ideas or activities that they felt might challenge existing norms or traditions.
It was not until September 2012, when the team and the trainer headed to the field with the community members to develop monitoring methods together, that the impacts and importance of monitoring in ACM settled in. Through hands-on practice in building monitoring mechanisms with communities, the team gained confidence in their abilities and enthusiasm for monitoring. They recognized the potential of monitoring to increase the participation of women and improve learning, and they learned that monitoring activities take the learning to the field and generate excitement about monitoring.
The team learned that monitoring is not just about writing down data; the reflection and discussions that the monitoring activities generate feed the social learning curve. As one team member articulated: "Monitoring is a conversation." Guiding that conversation required preparation on the part of the field team. The team discovered that the reflection is better prepared and facilitated with open-ended questions, such as: "What did we do? What did you like? What did you not like and why? What did we learn? What was missing? How can we improve?" For instance, when monitoring participation, these questions were useful: "Why are the results like this? How can we improve?" The team learned that monitoring can start with a simple question. For instance, in Community A, the community wanted to monitor timber extraction. An ACM team member guided them first in creating a list of questions (e.g. "How much timber is here at the riverbank?" "What types of species are here at the riverbank?"). Based on that list, they selected one question and built their monitoring activity from there. Furthermore, the monitoring tools must be easily adaptable, and the community members must continue adapting them. For example, the team weighed the advantages and disadvantages of supplying pre-printed monitoring worksheets against the importance of having a monitoring instrument that the community members can develop themselves, using materials that they can obtain easily, such as notebooks (rather than computer printouts).
The team discovered that constant, informal conversations can be as useful as structured workshops. For example, in Community S, in the midst of a monitoring exercise, one team member took advantage of coming across an illegally felled tree. He did a mini-workshop about ACM with the group in the forest; the specificity of the location contributed to the learning experience because of the concrete example in situ.
Other outcomes of monitoring were identified by the team, such as improved participation, leadership and unity. One monitoring tool that was developed was a worksheet that community members used to monitor participation in meetings. Every time someone spoke or a decision was made, the person's name and gender was noted on the page. At the end of the meeting, the person taking the notes presented the results, and those attending reflected on them. The team discovered that the instrument to monitor participation in meetings served not only as a tool for reflecting on who was participating and why, but it also served to help people listen better and to motivate participation.
Learning about gender
One of the primary project objectives was to improve the participation of women in community-level decision-making, and it was hoped that ACM would create spaces for their greater participation. Through experimentation and reflection, the field team adapted the methodology to explore the interactions of gender relations and participation. The team learned several strategies for improving women's participation in meetings. These included direct questions to women, for example: "What do you think of this, Ms. X? What do the women think of this? What could the women be doing in this activity?" In workshops, women were more likely to participate in small groups, either women-only or mixed gender. Women-only break-out groups helped create an environment where women were comfortable speaking. On the other hand, putting women in small groups with men (usually best with at least two women per group) gave women an opportunity to show men that they knew what they were talking about. In community S, where the women were struggling to get involved, once they were broken out into small groups, women took on important roles: actively participating in discussion, writing on the flipchart paper, and presenting the group's work in front of the larger group.
The team also learned that they must create an environment of trust in order to draw out the participation of women. They made an extra effort to connect with women and encouraged leaders to find and implement ways to help women feel welcome at meetings. Creating a welcoming environment meant inviting women directly, personally, and visiting women who had participated before in order to invite them to participate again. It required requesting permission from the community authorities 6 to involve them and requesting that the authorities accompany the team members house-to-house in order to invite the women.
As the fieldwork proceeded, a moderate increase in the attendance of women at meetings was observed. However, men continued to dominate meetings and workshops, both in discussions and decision-making. When the ACM activities were moved out of a schoolroom or community house, which were typical meeting spaces, women's participation improved, with more active discussion and expression of their opinions. Women's participation was strongest in the monitoring activities in the field, where their participation in discussions and reflections was at times fully equivalent to that of men's. This was contrary to what community leaders had said -that women would not show up for work in the forest or participate. In fact, monitoring tended to create a more welcoming space where women were more likely to participate as equals with men.
As the team engaged in understanding the role of gender relations at the community level, their own understanding of gender evolved. The two male members of the team, in particular, reflected perspectives that are common in the region among men at the outset of the project. "In my prior experience, the term 'gender' was very Western and a cultural practice from the West," said one team member. Another team member said that initially he "thought about it before like people did in the communities, that it is just about equality and rights. . ." For instance, he explained that "During the gender workshops in communities A, F and K, the people expressed that gender had to do with equal rights of men and women, according to the community members. They also said that 'we all have rights, as men and women', but in truth it was complicated for the community members to understand the term 7 , and specifically for me too, because gender is something complicated to understand from the perspective of our cultural traditions. . . . Equity is the one way that we understood [gender]: equal conditions, treatment, opportunities, roles, without discrimination in participation." (team reflection, June 2014) In particular, the male members of the team expressed more reluctance than the female members to question or challenge the gender roles that constrained women's participation in meetings. Again, gender was perceived as a concept that was being imposed from outside their culture, and they were uncomfortable discussing or challenging the gender roles in communities because, in some thinking, preserving indigenous culture and preserving gender roles are linked. That perception evolved and became more nuanced and complex: "Through the ACM process I learned that gender is a concept about relationships and values and complementarity." 
Another team member contextualized the complexity of understanding gender in development projects and the tension with traditional societies: "In the end, gender equity is a social construction with a vision of human development in an equal manner. . . . But in the culture of the rural indigenous, people know how contradictory power dynamics are. For example, the man is the one who decides everything, and women second, but the focus on gender challenges the authority figure of the man in the family and the community as it does in Nicaraguan society" (team reflection, June 2014).
These changes in perspectives on gender were not simply conceptual; by changing their assumptions about how women and men relate and interact in different spaces, their new frameworks made it possible to understand women's and men's behaviours and the obstacles to participation. For instance, when encountering little participation of women in meetings or in leadership positions, the male team members at first repeated what the male leaders of the community said: that the women are given opportunities to participate in meetings, but that women simply do not want to. In other words, it is women's fault for not participating. These perspectives were reinforced by what they saw in meetings and workshops: men participating and women sitting silently, with a few exceptions. Insights into the complexities and contradictions of these gendered power dynamics have been explored in India and Sweden as well (Arora-Jonsson 2009).
However, when the team members began to engage in other methods -participant observation, participatory monitoring, interviews and activities outside of the meeting spaces -they observed significant obstacles to women's participation, including social exclusion and physical violence. They noted how the three most active female leaders were each sanctioned by the community at certain points. In the worst cases, one of the woman leaders was physically abused by her husband. The team learned that barriers to participation are complex, and that more in-depth understanding of dynamics at the household level would be necessary in order to fully understand the constraints on women's participation. They also noted that outside of a meeting -particularly in the field -gender roles were less rigid, and women assumed leadership roles. In one instance, during a morning activity in the forest in community K, at the end of the activity, Ms. S spontaneously led a group reflection on the activity. In contrast, in the afternoon, she sat silently in the community meeting.
The governance monitoring tool further opened up spaces for participation via the monitoring activities. As one of the team members said: "With this process, the women woke up; they gave opinions more, expressing their concerns, needs and lack of compliance by authorities who made decision about natural resources, and in a certain way they demanded that they be taken into account in the consultations about their resources or that they know better how [resources] were being managed by the authorities, with greater transparency of funds and taxes" (team reflection, December 2015). Also, "ACM promotes gender participation in a more diplomatic way through activities. For example, in the ACM workshops on monitoring, there was an activity on gender, but no one knew that that same activity encouraged the participants to have equal opportunity and rights. In this sense I believe that approach to gender in ACM works in the communities" (team reflection, December 2015).
The team began to deepen their understandings of gender: "Through practice, I learned that the community members understand gender without defining the term conceptually; better put, it is understood as complementarity in their diverse activities. For example, in the planting of rice, nobody thinks about it with a gender focus, but in practice, men and women are sharing the task in a collaborative way" (team reflection, December 2015).
In the process of encouraging the uptake of adaptive collaborative management within the communities, it was found that the research team too adopted adaptive collaborative behaviours, learning and adapting their own behaviours. In other words, they "learned how to learn". Learning how to do ACM together -deliberately reflecting on their attitudes and the roles and interactions of women and men -generated new knowledge about gender. Furthermore, the team learned and adapted the methodology as their knowledge about gender evolved: they applied ACM learning cycles to their ACM activities. Training the team in ACM methods created an environment of constant social learning in multiple nested cycles, or multi-loop learning.
CONCLUSION
When the research project started in 2011, the initial focus was on training the field team in facilitation skills so that they could implement ACM in the field and promote women's participation. Ironically, it was not explicitly anticipated that the trainer and team would also need to learn how to learn.
As the project progressed, it grew increasingly clear that the researchers were also actors within an evolving scenario, where the complexities of gender, governance, and the environment were challenging not only to understand, but also to act upon. The environment required the team to adopt "the learning way". Constant reflection and discussion created triple-loop learning, in which the team members adapted the way that they engaged in social learning. The shifting frames of reference required conscious discussion and continuous reflection not only about activities and outcomes, but also about how the team's knowledge was changing, questioning their own ideas, beliefs and assumptions. At times team members were uncomfortable, particularly when their experiences challenged their own assumptions and identities with regard to indigenous culture and gender.
The equally complex nature of gender relations requires learning how to learn about the ways that gender shapes communities and the interaction of individuals with the forest. It requires examining one's own gender perspective and biases in order to understand better the dynamics of gender relations on the ground. Most importantly, it confirms that in order to work in and engage with the complex environments of forest communities, everyone -trainers, researchers, and field practitioners too -must learn how to learn.
Simulations of Yarn Unwinding from Packages
Yarn unwinding from stationary packages has an important role in many textile processes. In order to achieve high unwinding velocity that can lead to increased production rate, it is necessary to develop packages with a suitable geometry, dimensions, and winding type. The optimal design of the package leads to an optimal form of the balloon and low and uniform tension at high unwinding speed. In this work I will show a simple mathematical model which can be used for simulating the unwinding process. Using experimental values I will find a relation between the angular velocity of the yarn around the axis and the tension. This will allow me to calculate the oscillations of the tension in the yarn during the unwinding from packages of different geometries and with different winding angles. I will find an optimal design for a package of a new generation.
Introduction
At the end of the production process of a spinning factory, yarn is wound onto packages. Packages are therefore a crucial intermediate product of the textile industry. The rapid development of fast-running weaving and knitting machines has led to a situation where unwinding yarn from packages is one of the main production bottlenecks. For this reason it is of utmost importance to determine the package geometry and the winding angle that maximize the unwinding velocity for a given maximal allowed yarn tension [1-6].
The yarn is being withdrawn with velocity V through an eyelet, where we also fix the origin O of our coordinate system (Figure 1). The yarn rotates around the z axis with an angular velocity ω. At the lift-off point (Lp), the yarn lifts from the package and forms a balloon (the name stems from the fact that in one period of rotation, this part of the yarn describes a surface of revolution that has the form of a balloon). At the unwinding point (Up), the yarn starts to slide on the surface of the package. The angle ϕ is the winding angle of the yarn on the package. In order to be able to compare various package designs, it is necessary to determine the influence of the winding angle of the package on the angular velocity of the yarn forming the balloon, since the angular velocity determines the yarn tension to a great extent. The results of the simulations will be used to suggest a design for packages of a new generation, on which two kinds of layers would alternate: parallel-wound layers and layers with a high winding angle.
Machines and materials
The current angular velocity ω for cylindrical packages is computed by using the relation [2,3]

ω = (V/c) cos ϕ (1 ∓ tan ϕ),    (1)

where the upper sign applies while the unwinding point moves toward the eyelet and the lower sign while it moves away from it. In order to perform simulations, we additionally need a relation between the unwinding velocity and the yarn tension. The tension is largest in the eyelet through which the yarn is being pulled [3].
We measured the tension for parallel-wound cylindrical packages of different dimensions and for different unwinding velocities (Table 1). For such packages the winding angle is ϕ ≈ 0°, and we obtain ω = V/c. This is the expected result, since in this case the unwinding velocity V equals the circumferential velocity of the lift-off point, which is given by cω [3]. Figure 2 shows the unwinding yarn system. Yarn is withdrawn from a fixed package by a Lesson yarn drive at a transport speed of up to 2000 m per minute. A support for the guide is fixed in place, on which the sensor for measuring the yarn tension is installed.
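For a parallel-wound package, the relation ω = V/c can be evaluated directly once the units are made consistent. The following Python snippet is only an illustration; the function name and the example values are assumptions, not measurements from Table 1.

```python
def angular_velocity(v_m_per_min: float, c_mm: float) -> float:
    """Angular velocity omega = V/c of a parallel-wound package (phi ~ 0).

    v_m_per_min -- unwinding velocity V in metres per minute
    c_mm        -- package radius c in millimetres
    Returns omega in rad/s.
    """
    v = v_m_per_min / 60.0   # convert to m/s
    c = c_mm / 1000.0        # convert to m
    return v / c

# Example: V = 2000 m/min from a package of radius c = 150 mm
print(round(angular_velocity(2000.0, 150.0), 1))  # → 222.2 rad/s
```

At ϕ ≈ 0° this is also the average angular velocity around which the oscillations discussed below take place.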
Table 1. Parameters and range of values.
Methods and simulation
In a recent paper we developed a mathematical model [7,8] which permits a simulation of the unwinding process (Figure 3). In our simulation we calculate the instantaneous winding angle as a periodic function of time whose amplitude is the maximal winding angle ϕ; we then determine the corresponding angular velocity ω, and finally we obtain an approximation for the tension using the data from Section 2. In our calculations we considered the unwinding of two consecutive layers of yarn, so that the package radius remains approximately constant during this time. The graph below presents the changing tension in the yarn as it is unwound from a cylindrical package. The time is expressed in units of phase: 2π corresponds to one cycle of the unwinding point up and down the package.
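The model of [7,8] is a system of differential equations for the balloon; the much simpler kinematic sketch below only illustrates the simulation loop described above. The square-wave traverse law, the form of the angular-velocity relation, and the quadratic tension law with its constants are illustrative assumptions, not the measured data.

```python
import math

def winding_angle(t: float, phi_max: float) -> float:
    """Instantaneous winding angle over one traverse cycle of phase 2*pi:
    +phi_max on the forward stroke, -phi_max on the backward stroke."""
    return phi_max if math.sin(t) >= 0 else -phi_max

def omega_of(v: float, c: float, phi: float) -> float:
    """Angular velocity omega = (V/c) cos(phi) (1 - tan(phi)); the sign of
    phi encodes the direction of motion of the unwinding point."""
    return v / c * math.cos(phi) * (1.0 - math.tan(phi))

def tension_of(omega: float) -> float:
    """Placeholder for the measured tension-vs-omega curve: a simple
    centrifugal-like law T = T0 + k * omega**2 (illustrative constants)."""
    T0, k = 0.02, 1.0e-6
    return T0 + k * omega ** 2

# Sweep one traverse cycle for V = 2000 m/min, c = 150 mm, phi = 10 deg
v, c = 2000.0 / 60.0, 0.150                     # m/s, m
phi_max = math.radians(10.0)
phases = [2 * math.pi * i / 200 for i in range(200)]
tensions = [tension_of(omega_of(v, c, winding_angle(t, phi_max))) for t in phases]
print(min(tensions), max(tensions))
```

In this toy version the tension jumps between two levels each time the traverse direction reverses, which is the qualitative behaviour of the figures discussed below.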
In Figures 4 and 5, we show the results for the oscillations of tension for a range of four winding angles ϕ = 0, 10, 20, and 30° for a very small package radius of c = 70 mm and for two different unwinding velocities, V = 1000 and 1400 m/min, respectively. The tension is a function of the angular velocity, so it oscillates in agreement with Eq. (1). When the direction of unwinding changes near the edges of the package, the yarn tension undergoes a rapid change. Such sudden jumps lead to strong strain in the yarn, and the yarn can be damaged or even broken in two parts. In this case we again observe very high tension in the yarn for all the enumerated winding angles. The tension oscillates from 0.05 to 1.8 N. In such a case, the unwinding would fail. Figure 6 shows the time dependence of the yarn tension for an unwinding velocity of V = 2000 m/min. The winding angle is fixed at ϕ = 5°, and we consider package radii in the range from c = 70 to 500 mm. For a large package radius the tension is small, but it becomes sizable already at rather low radii between c = 100 and 200 mm. Nevertheless, the highest calculated tensions remain rather low, T = 0.7 and 1.4 N. We therefore make the following important conclusion: the yarn tension can be strongly reduced by making use of packages with a large radius.
The variation of the radius of the topmost layer, the angular velocity, and the tension in the yarn during the unwinding from a parallel-wound cylindrical package at V = 2000 m/min, ϕ = 5°, c = 70-200 mm (dashed line), and c = 160-500 mm (full line).
In Figures 7-10, we compare the time dependence of the yarn tension for two package radii, c = 500 and 160 mm, and for three winding angles 0, 5, and 10° at two unwinding velocities, V = 2000 and 1500 m/min. For a package radius of 500 mm, we find suitable tensions T = 0.015 and 0.03-0.04 N for all winding angles. For a package radius of 160 mm, we find acceptable tension only for winding angles ϕ = 0° and ϕ = 5°: in these cases the tension rises at most to 0.055 N, which is at the higher end of the acceptable values. At ϕ = 10° we observe tensions around 0.08 N, which exceeds the limit. Figures 11-13 show the dependence of the amplitude of tension oscillations as a function of the package radius (from 70 to 500 mm) and the winding angle (from 0 to 20°) for three different unwinding velocities: 1000, 1500, and 2000 m/min. For all unwinding velocities, the oscillation amplitudes are larger for packages with a smaller radius and a larger winding angle. In particular, the oscillations are very large for radii lower than 160 mm and for winding angles exceeding 5°.
The oscillations of yarn tension are related to the variation of the angular velocity of yarn rotation around the package axis.
The amplitude of the angular velocity oscillation is

Δω = (2V/c) cos ϕ tan ϕ.

In the region of interest, i.e. for ϕ < 25°, we have cos ϕ ≈ 1 and tan ϕ ≈ ϕ. We get

Δω ≈ 2Vϕ/c.

This means that the amplitude of the angular velocity oscillation is approximately proportional to the unwinding velocity and the winding angle, but inversely proportional to the package radius.
From this relation we can estimate the yarn tension oscillation, knowing the experimentally measured dependence between the angular velocity and the tension. We can also make use of Figure 14, which can serve to roughly estimate the amplitude of the oscillations. We determine the average angular velocity during unwinding through

ω₀ = (V/c) cos ϕ ≈ V/c.

Here we made use of the small-angle approximation cos ϕ ≈ 1. This relation is applicable in the same range as the expansion of the tan function, and the error is also of the same magnitude.
In Figure 14 we determine the interval from ω₀ − Δω/2 to ω₀ + Δω/2 and read off the interval of yarn tension it corresponds to. The amplitude of the yarn tension oscillations is then simply the difference between the maximal and minimal values.
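This graphical read-off can also be done numerically: interpolate a digitised tension-versus-ω curve at ω₀ − Δω/2 and ω₀ + Δω/2 and subtract. In the Python sketch below, the tabulated curve points are invented for illustration and do not reproduce the measured curve of Figure 14.

```python
import math

# Hypothetical digitised points of a tension-vs-omega curve
# (omega in rad/s, tension in N) -- illustrative values only.
curve = [(150, 0.020), (200, 0.030), (240, 0.070), (300, 0.075), (350, 0.090)]

def interp_tension(omega: float) -> float:
    """Piecewise-linear interpolation of the digitised tension curve."""
    for (w0, t0), (w1, t1) in zip(curve, curve[1:]):
        if w0 <= omega <= w1:
            return t0 + (t1 - t0) * (omega - w0) / (w1 - w0)
    raise ValueError("omega outside the tabulated range")

def tension_amplitude(v: float, c: float, phi: float) -> float:
    """Estimated amplitude of the tension oscillation: the difference of the
    tensions read off at omega0 + d/2 and omega0 - d/2, with
    omega0 = (V/c) cos(phi) and d = (2V/c) cos(phi) tan(phi)."""
    omega0 = v / c * math.cos(phi)
    d = 2 * v / c * math.cos(phi) * math.tan(phi)
    return interp_tension(omega0 + d / 2) - interp_tension(omega0 - d / 2)

# V = 2000 m/min, c = 150 mm, phi = 10 degrees
amp = tension_amplitude(2000.0 / 60.0, 0.150, math.radians(10.0))
```

On the steep portions of the tabulated curve the same Δω produces a much larger tension amplitude, which mirrors the peak behaviour discussed next.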
This graphical method for making estimations can be applied to better understand Figure 15, where we plot the dependence of the oscillation amplitude on the package radius for a winding angle ϕ = 10° and an unwinding velocity V = 2000 m/min. This is, in fact, a cross section of Figure 13 at constant angle ϕ. As a rough rule, the amplitude of the oscillations decreases with increasing package radius c. In addition, however, one observes a peak in the range of radii from c = 110 to 180 mm. This is due to the particular dependence of the tension on the angular velocity, as shown in Figure 14. In some ranges of ω, this dependence is steeper, for instance, from ω = 200 to 240 rad/s. In this interval, oscillations of the angular velocity lead to a large amplitude of tension oscillations. In other ranges, for instance, from ω = 250 to 300 rad/s, the tension does not depend much on the angular velocity; hence the yarn tension oscillations are small.
Figure 15. The cross section of the previous figure at the winding angle ϕ = 10°.
In Figure 16 we plot the dependence of the tension oscillation amplitude on the package radius and the unwinding velocity at a constant winding angle ϕ = 5°. We notice that the lines of constant amplitude are simply straight lines. This means that the amplitude of tension oscillations at constant angle depends only on V/c, as expected from Eqs. (4) and (5). This suggests the possibility of a compromise: if it is known that the yarn is damaged at some given amplitude of tension oscillations, then the possible choices of package radius c and unwinding velocity V lie on a straight line. One can thus use small package radii with small unwinding velocities, or large packages with correspondingly higher unwinding velocities. It is also apparent that during unwinding from packages with a radius of 150 mm, it is possible to unwind at all velocities shown, with a possible exception of those near the maximum value of V = 2000 m/min.
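Because the amplitude at a fixed winding angle depends only on the ratio V/c, any acceptable operating point defines a whole family of admissible packages. The sketch below rescales a reference point; the reference values are assumed examples, not measured damage limits.

```python
def max_velocity(c_mm: float, v_ref: float = 2000.0, c_ref: float = 150.0) -> float:
    """Largest unwinding velocity (m/min) that keeps the same V/c ratio --
    and hence, at a fixed winding angle, the same tension-oscillation
    amplitude -- as the reference operating point (v_ref, c_ref)."""
    return v_ref * c_mm / c_ref

# A package with twice the radius tolerates twice the unwinding velocity
print(max_velocity(300.0))  # → 4000.0
```

This is simply the statement that the lines of constant amplitude in Figure 16 are straight lines through the origin of the (c, V) plane.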
Packages with alternating layers
To reduce the tension oscillations, we devised packages with alternating layers. They are constructed so that: a. When the unwinding point moves backwards, the parallel layers are unwound.
b. When the unwinding point moves forward, the layers with a high winding angle are unwound. Between two parallel layers, there should always be one layer with a higher winding angle in order to avoid interweaving of the parallel layers. In Figures 17 and 18, we compare packages with alternating layers with regular cross-wound packages. The unwinding velocity is V = 2000 m/min for two package radii, c = 200 and 150 mm. The winding angle of the cross-wound layers is ϕ = 10°. As expected, the packages with alternating parallel-wound and cross-wound layers significantly reduce the tension. We have thus eliminated the high tension spikes which lead to yarn breaking in conventional cross-wound packages. For this reason, the new-generation packages would allow unwinding at higher velocities than traditional packages.
In Figures 19-21, we compare the amplitude of the yarn tension oscillation in regular cross-wound packages and in new-generation packages for different unwinding velocities, from V = 1000 to 2000 m/min, and for different winding angles, from ϕ = 0 to 20°. Package radii are 120, 150, and 200 mm. The amplitude of tension oscillation is larger for larger unwinding velocities and for larger winding angles. This is the case for all package radii. The totality of the results indicates that this dependence is significantly stronger for conventional cross-wound packages, where the oscillation amplitude becomes very large, while the oscillations are notably lower in the new-generation packages. The differences are largest for the package radius of c = 200 mm.
In Figures 22 and 23, we show the amplitude of the tension oscillations in new-generation packages as a function of package radius and unwinding velocity at a constant winding angle of the cross-wound layers of ϕ = 10°. The first figure suggests that at V = 2000 m/min, the package radius should be at least c = 150 mm in order to avoid yarn breaking.
Conclusion
The problem of high yarn tension and its large oscillations can be avoided by constructing new-generation packages. From this study, the following conclusions can be drawn: • In designing new package types, it is desirable to limit not only the maximal value of the yarn tension but also the amplitude of the tension oscillations.
• The yarn tension can be strongly reduced by making use of packages with large radius.
• The alternating design helps to reduce sudden change of tension, and it leads to higher stability of the unwinding process.
With this design, the tension and the amplitude of the tension oscillations can be significantly reduced. In this case it is possible to safely unwind from packages of smaller radius even at higher unwinding velocities. This would allow higher production rates without increased downtime due to yarn breaking. Based on the results of our calculations, we propose a package with the following characteristics: the inner cylinder radius should be 150 mm (arguably even 100 or 120 mm), and the outer package radius should be from 400 to 500 mm. Parallel layers should have a winding angle as close to 0° as possible, while the winding angle of the other layers should be no higher than 10°.
Shared genetic aetiology of respiratory diseases: a genome-wide multitraits association analysis
Objective This study aims to explore the common genetic basis between respiratory diseases and to identify shared molecular and biological mechanisms. Methods This genome-wide pleiotropic association study uses multiple statistical methods to systematically analyse the shared genetic basis between five respiratory diseases (asthma, chronic obstructive pulmonary disease, idiopathic pulmonary fibrosis, lung cancer and snoring) using the largest publicly available genome-wide association study summary statistics. The aims of this study are to evaluate global and local genetic correlations, to identify pleiotropic loci, to elucidate biological pathways at the multiomics level and to explore causal relationships between respiratory diseases. Data were collected from 27 November 2022 to 30 March 2023 and analysed from 14 April 2023 to 13 July 2023. Main outcomes and measures The primary outcomes are shared genetic loci, pleiotropic genes, biological pathways and estimates of genetic correlations and causal effects. Results Significant genetic correlations were found for 10 trait pairs among the 5 respiratory diseases. Cross-Phenotype Association analysis identified 12 400 significant potential pleiotropic single-nucleotide polymorphisms at 156 independent pleiotropic loci. In addition, multitrait colocalisation analysis identified 15 colocalised loci and a subset of colocalised traits. Gene-based analyses identified 432 potential pleiotropic genes, which were further validated at the transcriptome and protein levels. Both pathway enrichment and single-cell enrichment analyses supported the role of the immune system in respiratory diseases. Additionally, five pairs of respiratory diseases have a causal relationship. Conclusions and relevance This study reveals the common genetic basis and pleiotropic genes among respiratory diseases. It provides strong evidence for further therapeutic strategies and risk prediction for the phenomenon of respiratory disease comorbidity.
INTRODUCTION
[4][5][6][7] However, the mechanism of such comorbidity remains unknown. Genome-wide association studies (GWASs) have identified respiratory disease-associated susceptibility loci known as single-nucleotide polymorphisms (SNPs).[10] Considering that previous studies have revealed significant genetic association loci for these diseases, shared genetic mechanisms may provide important insights into the comorbidity of respiratory diseases.
A gene controlling two or more traits is commonly considered a pleiotropic locus. Identifying pleiotropic loci is an important strategy for resolving genetic mechanisms. A common way to identify pleiotropic loci is to directly take the intersection of the significantly associated loci of each trait. However, such a strategy has low power due to the
WHAT THIS STUDY ADDS
⇒ It remains unclear whether and how common genetic components contribute to the comorbidity of respiratory diseases.This study reveals the mechanism of comorbidity in respiratory diseases and provides a theoretical basis for the diagnosis and prevention of multiple comorbidities in clinical practice.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Our results have identified a number of old and new genetic targets for respiratory diseases that may help in the development of new drug targets or new utilisation of existing drugs.
limited sample size and misses a large number of potentially shared loci. Indeed, enlarging the sample size entails substantial costs. Another effective approach is to use joint modelling 11 of genetic correlations between phenotypes, which can reveal new genetic loci and identify potentially shared loci among diseases. Previous studies have also attempted to explore the genetic overlap 12 13 or causality 14 between a number of respiratory diseases, but these were generally confined to pairwise comparisons with limited sample sizes. The causes and structure of the commonalities among respiratory diseases remain largely unknown. Therefore, it is important to further explore the common genetic factors that underlie the commonality between respiratory diseases.
This study uses a variety of statistical genetics methods to comprehensively explore the common genetic basis between five respiratory diseases: asthma, chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), lung cancer (LC) and snoring. First, we evaluated global and regional genetic correlations between diseases. We then used the Cross-Phenotype Association (CPASSOC) 15 method to identify pleiotropic genetic variants or loci between diseases. In addition, Hypothesis Prioritisation in multitrait Colocalisation (HyPrColoc) 16 analysis was conducted to identify colocalised loci and traits. We also performed gene-level analyses to identify candidate pleiotropic genes through various algorithms. Finally, Mendelian randomisation (MR) analysis was performed to probe different types of pleiotropy, namely vertical and horizontal pleiotropy.
In summary, through a series of multidimensional independent and combined analyses, we identified pleiotropic loci shared among the five respiratory diseases that could be prioritised as targets for drug development and repurposing, given their potential to simultaneously prevent or treat these diseases.
Data processing
We used five publicly available GWAS [17][18][19] summary statistics for respiratory diseases (online supplemental eTable 1). We used snoring as a substitute for obstructive sleep apnoea (OSA): on the one hand, snoring is a primary symptom of OSA; on the other hand, as an independent disease phenotype, snoring has a high genetic correlation with OSA. The total sample size for asthma is 1 376 071, with 121 940 cases. For COPD, the total sample size is 995 917, with 58 559 cases. IPF has a total sample size of 953 873, with 6257 cases. LC has a total sample size of 85 716, with 29 266 cases. The total sample size for snoring is 408 317, with 152 302 cases. We performed rigorous quality control. The detailed characteristics of the GWAS summary statistics and the quality control process are given in online supplemental eMethods. The overall study design is shown in figure 1.
Global genetic and local correlation analysis
Both linkage disequilibrium score regression (LDSC) 20 and high-definition likelihood (HDL) 21 were applied to assess global genetic correlations between diseases. We did not constrain the LDSC intercept, in order to assess population stratification within individual GWASs and sample overlap between pairs of GWASs.
We used the Local Analysis of Variant Association (LAVA) 22 to estimate genetic correlations in independent regions of the genome for each pair of traits. LAVA allows us to clarify more effectively the specific contributions of local genomic regions to the overall genetic correlation. A false discovery rate (FDR) correction was applied to all of the above results. The significance threshold was set at p adjusted <0.05.
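The FDR correction referred to throughout is the Benjamini-Hochberg step-up procedure. A minimal, library-free sketch (the four input p values are purely illustrative, not LAVA output):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Illustrative p values for four local genetic correlations:
adj = bh_adjust([0.001, 0.02, 0.03, 0.2])
significant = [p < 0.05 for p in adj]   # [True, True, True, False]
```

In practice one would use an existing implementation (e.g., `statsmodels.stats.multitest.multipletests` with `method="fdr_bh"`); the sketch only makes the adjustment explicit.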
Multiple-trait meta-analysis
To identify potential pleiotropic SNPs for respiratory diseases, the SHet model provided by CPASSOC was used to perform a multitrait meta-analysis. 15 The CPASSOC method effectively enlarges the sample size by incorporating information from multiple GWASs to identify novel significant SNPs. CPASSOC allows for heterogeneity and sample overlap. SNPs with P CPASSOC <5×10 −8 were considered significant pleiotropic SNPs.
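As a simplified illustration of how pooling evidence across traits gains power, the sketch below combines per-trait z scores with a sample-size-weighted Stouffer statistic. This is not the SHet statistic used by CPASSOC, which additionally models heterogeneity and sample overlap; the z scores and sample sizes are hypothetical.

```python
import math

def stouffer_z(z_scores, sample_sizes):
    """Sample-size-weighted Stouffer combination of per-trait z scores."""
    num = sum(math.sqrt(n) * z for z, n in zip(z_scores, sample_sizes))
    return num / math.sqrt(sum(sample_sizes))

# Hypothetical SNP with moderate evidence in each of two traits:
z_meta = stouffer_z([4.0, 3.5], [100_000, 60_000])
# Two-sided p value for the combined statistic:
p_meta = math.erfc(abs(z_meta) / math.sqrt(2))
# Neither trait alone reaches 5e-8, but the combined evidence comes close.
```

The point of the illustration is that two sub-threshold single-trait signals can jointly approach genome-wide significance, which is why intersecting per-trait hits (as criticised in the Introduction) misses shared loci.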
Genomic loci characterisation and functional annotation
We used functional mapping and annotation of genetic associations (FUMA) 23 to annotate the significant genetic loci from the CPASSOC results, with the parameter settings shown in online supplemental eMethods. Combined Annotation Dependent Depletion (CADD) scores and RegulomeDB (RDB) scores were provided by FUMA, and SNPs with CADD scores >12.37 were considered potentially deleterious variants. We also annotated the GWASs of the five respiratory diseases for comparison with the CPASSOC results.
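Operationally, this step amounts to filtering the annotated SNPs by the two thresholds above. A toy sketch: the record layout and p values are hypothetical (not FUMA's actual output schema), although the two real rsids and their CADD scores are taken from the Results.

```python
# Hypothetical annotated SNP records (field names are illustrative).
snps = [
    {"rsid": "rs11571833", "p_cpassoc": 2.1e-10, "cadd": 36.0},
    {"rsid": "rs34712979", "p_cpassoc": 8.4e-12, "cadd": 22.8},
    {"rsid": "rs0000001",  "p_cpassoc": 3.0e-9,  "cadd": 4.1},
]

GWAS_SIG = 5e-8    # genome-wide significance threshold
CADD_CUT = 12.37   # deleteriousness cut-off used with FUMA annotation

deleterious = [
    s["rsid"] for s in snps
    if s["p_cpassoc"] < GWAS_SIG and s["cadd"] > CADD_CUT
]
# deleterious -> ['rs11571833', 'rs34712979']
```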
Multitrait colocalisation
We performed HyPrColoc 16 analyses to further identify common causal variants for each pleiotropic locus defined by FUMA. HyPrColoc divides traits into clusters, with the traits in each cluster sharing a causal SNP. A posterior probability (PP) >0.7 defined a final colocalised locus. Additionally, we conducted sensitivity analyses of the colocalisation results using different prior probabilities (1×10 −4 , 1×10 −5 ).
Candidate gene analysis
We searched for genes that overlapped with the pleiotropic loci and then subjected them to gene-based association analysis with multimarker analysis of GenoMic annotation (MAGMA). 24 FDR-corrected p<0.05 was considered significant. Based on lung and whole blood tissues provided by GTEx V.8, 25 we used Functional Summary-based Imputation to perform transcriptome-wide association study 26 (TWAS) analyses on the GWAS of each single trait. TWAS results for single traits were combined to clarify whether genes are shared between multiple traits at the transcriptome level. We also conducted a proteome-wide association study 27 (PWAS). The PWAS assesses associations between plasma proteins and respiratory traits with an analytical strategy consistent with TWAS. Significant results were defined as p adjusted <0.05 after correction using the FDR method.
Pathway and GTEx tissue enrichment analysis
We conducted a MAGMA 24 gene-set enrichment analysis based on the Gene Ontology (GO) and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway databases.[29][30] Tissue-specific and single-cell enrichment analyses were performed with PCGA. Significant results were defined as p adjusted <0.05 after correction using the FDR method. In PCGA, tissue results are based on GTEx V.8 and single-cell results are derived from human (based on PanglaoDB, the Human Cell Landscape and the Allen Brain Atlas) and mouse (based on PanglaoDB) datasets. FDR-corrected p<0.05 was considered significant.
Mendelian randomisation analysis
We used bidirectional two-sample MR analysis to explore the causal relationships between the five phenotypes. The primary method was the multiplicative random-effects inverse-variance weighted (IVW-MRE) method. IVW-MRE provides more accurate estimates even in the presence of heterogeneity. As complementary strategies, we applied MR-Egger regression 31 and the weighted median 32 method alongside IVW. To verify the accuracy of the results, we used sensitivity analyses such as the MR-Egger intercept, leave-one-out analysis, the MR-Steiger directionality test and the F-statistic. The detailed methods are in online supplemental eMethods. For more information on the SNPs, please refer to online supplemental eTables 2 and 3. Significant results were defined as p adjusted <0.05 after correction using the FDR method.
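A minimal sketch of the IVW-MRE estimator from per-SNP summary statistics, using standard first-order weights (exposure standard errors ignored). The three instruments below are hypothetical; a real analysis would use a dedicated package such as TwoSampleMR.

```python
import math

def ivw_mre(beta_exp, beta_out, se_out):
    """Multiplicative random-effects IVW estimate from summary statistics.

    First-order weights w_j = beta_exp_j**2 / se_out_j**2 applied to the
    per-SNP Wald ratios beta_out_j / beta_exp_j.
    """
    ratios = [bo / be for be, bo in zip(beta_exp, beta_out)]
    weights = [be * be / (so * so) for be, so in zip(beta_exp, se_out)]
    beta = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    # Cochran's Q measures heterogeneity among the ratios; the multiplicative
    # random-effects model inflates the SE when Q / (J - 1) exceeds 1.
    q = sum(w * (r - beta) ** 2 for w, r in zip(weights, ratios))
    phi = max(1.0, q / (len(ratios) - 1))
    se = math.sqrt(phi / sum(weights))
    return beta, se

# Three hypothetical instruments:
beta, se = ivw_mre(beta_exp=[0.10, 0.08, 0.12],
                   beta_out=[0.05, 0.03, 0.07],
                   se_out=[0.02, 0.02, 0.02])
```

With little heterogeneity (Q/(J−1) ≤ 1) the estimate reduces to the fixed-effect IVW; under heterogeneity the SE is inflated, which is what makes IVW-MRE "more accurate in the presence of heterogeneity".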
Patient and public involvement
This study employed GWAS for data analysis and did not directly involve patients or the public.
Global and local genetic correlation
The LDSC results indicated significant heritability for all diseases (online supplemental eTable 4), with estimates ranging from 0.3% to 8.4%. Significant genetic associations (p adjusted <0.05) were found for all 10 trait pairs among the 5 respiratory diseases. The strongest genetic association was found between asthma and COPD (rg=0.7052), indicating a highly shared genetic component between the two. COPD and LC were also strongly correlated (rg=0.5944), suggesting a high degree of concordance between the two in terms of genetic components, but not identical. HDL gave nearly identical results to LDSC, except that IPF and LC showed no genetic association in HDL (figure 2A,C and online supplemental eTables 5 and 6). Additionally, the results suggest a potential sample overlap between the GWASs of the selected respiratory diseases (online supplemental eTable 5). There may be some common genetic components among respiratory diseases, but there is no evidence of the extent to which these genetic components are shared and the specific mechanisms involved in these five diseases.
The results of LAVA showed a total of 499 regions (figure 2B,D and online supplemental eTable 7) in which at least one trait pair had a significant local genetic correlation (p adjusted <0.05). Of the 499 local regions, 87.58% of the correlations were positive and 12.42% were negative. The region 6:32629240-32682213 had the most correlated trait pairs (six), and every trait occurred at least once. The asthma-COPD trait pair had the highest significance in LDSC. LAVA indicated that this trait pair possessed the most regional genetic associations, all of which were positive.
16 index SNPs had CADD scores >12.37 and 6 were mRNA exonic variants (online supplemental eTable 9). For example, index SNP rs11571833 is in the exonic region of the gene BRCA2, with a CADD score of 36. BRCA2 is a common tumour susceptibility gene, and studies 33 have found that BRCA2 mutations directly double the risk of developing LC. The index SNP rs34712979, with a CADD score of 22.8, is in an intronic region of the gene NPNT. NPNT has been found to be significantly expressed in alveolar cells and lung fibroblasts, 34 and to regulate 35 36 the risk of respiratory diseases such as COPD and LC.
Identification of colocalised loci
HyPrColoc identified 15 (10%) colocalised loci (PP>0.7) (figure 3 and online supplemental eTable 11). Detailed information on all SNPs within the colocalised loci can be found in online supplemental eTable 12. The trait set with the highest posterior probability was asthma and COPD, colocalised at locus 6:19837774-19844117 (PP=0.9687, causal SNP: rs9350191). Nine of the 15 colocalised loci contained asthma and COPD. Of all causal SNPs, five (33%) were intronic variants and eight (53%) were intergenic variants. Two (13%) were exonic variants, both mRNA exonic variants: rs28929474 (SERPINA1) and rs1641512 (ATP1B2). Notably, 7 of the 15 colocalised loci were located within the newly discovered pleiotropic loci. It is worth mentioning that in locus 5, the SNP with the highest posterior probability is rs34517439. The nearby gene, DNAJB4, is believed to be involved in regulating the growth of non-small cell LC. 37 Recent research reports that the deletion of this gene leads to a novel type of myopathy primarily characterised by early-onset respiratory failure. This suggests that DNAJB4 is closely associated with the development and progression of respiratory diseases. 38 The sensitivity analysis results include heat maps and similarity matrices (online supplemental eTable 11, marked with an asterisk). We conducted quantitative statistics (similarity matrices) on the sensitivity analyses for different prior probabilities and found that the variation in seven loci (5, 57, 73, 86, 119, 135 and 152) did not exceed 20%. The remaining loci appear sensitive to changes in prior probabilities but still form good clusters of colocalised traits. Even with the different prior probabilities, six loci (17, 45, 123, 131, 132 and 139) with larger fluctuations also had a probability greater than 50% of forming stable colocalisation clusters. We also examined the LD relationships between the causal SNPs identified in the HyPrColoc analysis and
Open access
surrounding SNPs, finding that most regions do not contain variants in extremely high LD with the causal variants (online supplemental eTable 13). In five colocalised loci (90, 123, 132, 135 and 139), causal variants exhibit LD relationships with r 2 ≥0.9, accounting for 33.3% of all loci.
Candidate gene identification
We identified 739 protein-coding genes that overlapped with the 156 pleiotropic loci through positional gene mapping. MAGMA analysis further identified 678 significant pleiotropic genes (online supplemental eTable 14). Among them, the most significant gene was IL1R1 (p=2.30×10 −16 ). The significant genes obtained from MAGMA were analysed by TWAS and PWAS. A total of 3600 tests were performed for the TWAS analysis (online supplemental eTable 15). 593 tissue-gene-trait pairs were significantly associated (p adjusted <0.05): asthma (196), COPD (125), IPF (75), LC (93) and snoring (104). 199 (46%) genes reached significance in at least one tissue, suggesting that the effects of pleiotropic genes on phenotype are influenced by the amount of mRNA expression. Of these, 108 genes were shared among different traits, mostly in a tissue-specific manner. PWAS results (online supplemental eTable 16) indicated that 19 plasma proteins were expressed at significant levels (p adjusted <0.05). Nine of these plasma proteins shared expression in two or more respiratory diseases. Comparing TWAS with PWAS, two significant protein-coding genes (IL1R1, PRSS8) were identified in the colocalised loci. IL1R1 and PRSS8 were associated with two or more traits at all levels.
Biological pathway, GTEx tissue and SNP-heritability enrichment
MAGMA gene-set analysis identified 130 significantly enriched biological pathways (p adjusted <0.05), including 115 GO pathways and 15 KEGG pathways (figure 4A and online supplemental eTable 17). The enriched pathways mainly concern the immune system. An example is interleukin-21 (GO: 0032625), which many studies have found to have a large effect on immune system diseases and cancer.
GTEx tissue enrichment analysis showed that the five respiratory diseases were significantly enriched in 33 tissues (figure 4B). The most significant tissue was lung. A total of 167 human cell types were significantly enriched, mainly single cells of lung, kidney and tracheal tissue. A total of 493 mouse cell types were significantly enriched, mainly single cells of lung, spleen, arterial and tracheal tissue.
Mendelian randomisation
We performed bidirectional MR analyses of 20 exposure-outcome trait pairs among the 5 respiratory diseases. IVW results showed a total of 10 trait pairs with significant causal associations (p adjusted <0.05), all of which were positive (figure 5 and online supplemental eTable 18). Evidence of horizontal pleiotropy existed for two trait pairs: asthma-COPD and asthma-LC. For asthma-COPD and asthma-LC, MR-Egger gave significant estimates consistent with IVW. This suggests that they remain causally related after accounting for horizontal pleiotropy. 31 All other sensitivity analyses supported the significant results (online supplemental eTable 19). Causality among the five respiratory diseases in the MR analysis was unidirectional, and the associations of the exposure-outcome trait pairs were not driven by a single SNP.
DISCUSSION
We used multiple trait association analysis to explore shared genetic factors among five respiratory diseases.This study presents a comprehensive analysis revealing the potential genetic basis of pleiotropic association loci, colocalised trait subsets, biological pathways and tissue specificity of pleiotropic genes.Different respiratory diseases often exhibit highly specific clinical features, but respiratory comorbidity is common.The results of this study support the idea that comorbidity between respiratory diseases may be driven by a common genetic basis.
We identified 61 candidate genes in the 15 colocalised regions. The most significant gene, IL1R1, is located in the region 2:102681836-102801334. The asthma and COPD trait pair colocalises at this locus, with a shared causal SNP of rs11679146. TWAS results for asthma showed that IL1R1 was significantly expressed in lung tissue but not in whole blood. Similarly, TWAS results for COPD were positive for IL1R1 in lung tissue but not in whole blood. This suggests tissue specificity of the IL1R1 gene. IL1R1 is bound by the IL-1α and IL-1β inflammatory factors. IL-1α and IL-1β have been identified in many preclinical models as mediators that play a major role in the respiratory inflammatory response. They regulate the secretion of neutrophils and macrophages 39 40 and have an active role 41 in the development of emphysema. Clinical cohort studies have found that upregulation of IL-1 pathway mediators is associated with frequent exacerbations of obstructive airway disease. 42 IL1R1 appears to be a marker of neutrophil inflammation and airflow obstruction and is a potential therapeutic target for asthma. 8 Thus, IL1R1 gene expression in lung tissue may affect both asthma and COPD. It may be feasible to develop drugs targeting IL1R1, which needs to be justified by more research data.
The TNFSF12 gene is found in the region 17:7447375-7466207. Asthma and snoring share this locus, and the most likely causal SNP is rs1641512. TNFSF12-encoded TWEAK and its receptor, fibroblast growth factor-inducible 14 (Fn14), have been shown to induce the production of IL-8 and GM-CSF by the human bronchial epithelium 43 and to contribute to airway inflammation by activating the NF-κB/STAT3 pathway to produce a variety of inflammatory mediators. 44 A more intriguing finding 45 was the direct detection of significantly higher TWEAK in the sputum of asthmatic patients, with the quantity positively correlated with disease severity. Obesity is the main OSA risk factor. 46 Certain clinical studies found that the adipose tissue of obese patients contained increased concentrations of TWEAK/Fn14. 47 Snoring is the most noticeable OSA symptom and a common source of intermittent hypoxia. Hypoxia-inducible factor (HIF)-1α has been reported to mediate OSA-related complications, 48 49 while TWEAK has been discovered to increase HIF-1 expression. 50 Thus, TWEAK/Fn14 may be a promising avenue for intervention in patients with asthma and OSA.
In addition to the genes already mentioned, further genes, including CFTR, EPHX2 and FTO, were found to be frequently associated with respiratory illnesses. Additionally, several genes, including BCKDK, SETD1A, VKORC1 and PRSS8, currently lack reported associations with respiratory illnesses. Significantly, the PRSS8 gene, positioned at 16:31137754-31152151, shows colocalisation in both asthma and snoring. This shared gene appeared in TWAS of both lung tissue and whole blood, as well as in PWAS. According to reports, the PRSS8 pathway plays a crucial role in controlling sodium transport in alveolar epithelial cells and pulmonary fluid balance, 51 but further research is needed to determine how it relates to respiratory illnesses.
The majority of the shared pathways among the five respiratory illnesses emphasised immunity, which closely matched the single-cell enrichment findings, such as the high expression of macrophages in the trachea and T cells in lung tissue. The immune system's undeniable implication in COPD has dominated basic research for years. 52 Much earlier, asthma was connected to the immunological response, 53 and it has also been demonstrated that immune dysregulation triggers IPF. 54 Moreover, an important factor in controlling OSA's inflammatory response is the activation of the NLRP3 inflammasome. The immunological microenvironment 55 and immunotherapy 56 of LC have also received much attention lately. This study provides additional support that the immune system may be an important shared pathway driving respiratory diseases.
Bidirectional MR analysis further explores possible genetically driven causal relationships between multiple diseases. 57 Asthma affects COPD, IPF and snoring, while COPD and IPF increase the risk of LC. The MR results demonstrated that the relationships between the five respiratory diseases are interactive and intricate. This provides evidence supporting the clinical prevention of respiratory disease complications.
This study reveals the mechanism of comorbidity in respiratory diseases and provides a theoretical basis for the diagnosis and prevention of multiple comorbidities in clinical practice. Our results have identified a number of old and new genetic targets for respiratory diseases that may help in the development of new drug targets or new uses for existing drugs. A limitation that has to be recognised is that the GWASs included in this study were all from a single European population, so the results may not necessarily generalise to other ancestries. Subsequent GWASs of other ancestries are needed to validate these results or to mine new applicable loci. Furthermore, the presence of excessively high LD may obscure the accurate identification of causal SNPs in the colocalisation analysis (specifically at loci 90, 123, 132, 135 and 139), necessitating further validation.
CONCLUSION
In conclusion, this study reveals genetic correlations underlying comorbidity between five respiratory diseases, identifies pleiotropic loci, defines common biological mechanisms dominated by immune responses and infers potential causal relationships between the diseases. We suggest that respiratory diseases are not entirely independent but are closely linked and share specific genetic loci. These findings provide strong evidence for further therapeutic strategies and risk prediction.
Figure 3
Figure 3 Manhattan plot of pleiotropic loci analysed by the CPASSOC method, with the x-axis denoting chromosomal location and the y-axis denoting the −log10 p value. The horizontal line indicates the genome-wide significance threshold of p=5×10 −8 . 156 pleiotropic loci were identified at the genome-wide significance level, of which 15 were colocalised loci (black dots represent index SNPs of pleiotropic loci and red dots represent index SNPs of colocalised loci). CPASSOC, Cross-Phenotype Association; SNP, single-nucleotide polymorphism.
Figure 2
Figure 2 Genetic correlation among five respiratory diseases. (A) Global genetic correlations among five respiratory diseases were explored with the LDSC and HDL methods. **p adjusted <0.05. (B) Frequency distribution of local genetic correlations for five respiratory diseases determined by the LAVA method. (C) High consistency of the LDSC and HDL methods for investigating global genetic correlations. (D) Counts of trait pairs with local genetic associations in specific regions of the chromosome. COPD, chronic obstructive pulmonary disease; HDL, high-definition likelihood; IPF, idiopathic pulmonary fibrosis; LC, lung cancer; LDSC, linkage disequilibrium score regression.
Figure 4
Figure 4 Biological functional and tissue and single cell-specific enrichment of candidate pleiotropic genes. (A) Top five pathways most significantly enriched for GO and KEGG gene sets. (B) Tissue-specific enrichment analysis using PCGA (based on GTEx and PanglaoDB) identified the top five significantly enriched tissues and single cells (p adjusted <0.05). BP, biological process; CC, cellular component; GO, Gene Ontology; GTEx, Genotype-Tissue Expression; KEGG, Kyoto Encyclopaedia of Genes and Genomes; MF, molecular function.
Figure 5
A bidirectional causal effect estimated with the random-effects IVW method. Error bars represent the 95% CIs of the corresponding MR estimates. P adjusted, p value after correction using the false discovery rate; P.pleiotropy, pleiotropy that remained significant after sensitivity analysis. COPD, chronic obstructive pulmonary disease; IPF, idiopathic pulmonary fibrosis; IVW, inverse variance weighted; LC, lung cancer.
Global transcriptome changes of elongating internode of sugarcane in response to mepiquat chloride
Background Mepiquat chloride (DPC) is a chemical that is extensively used to control internode growth and create compact canopies in cultured plants. Previous studies have suggested that DPC could also inhibit gibberellin biosynthesis in sugarcane. Unfortunately, the molecular mechanism underlying the suppressive effects of DPC on plant growth is still largely unknown. Results In the present study, we first obtained high-quality long transcripts from the internodes of sugarcane using the PacBio Sequel System. A total of 72,671 isoforms, with N50 at 3073, were generated. These long isoforms were used as a reference for the subsequent RNA-seq. Afterwards, short reads generated from the Illumina HiSeq 4000 platform were used to compare the differentially expressed genes in both the DPC and the control groups. Transcriptome profiling showed that most significant gene changes occurred after six days post DPC treatment. These genes were related to plant hormone signal transduction and biosynthesis of several metabolites, indicating that DPC affected multiple pathways, in addition to suppressing gibberellin biosynthesis. The network of DPC on the key stage was illustrated by weighted gene co-expression network analysis (WGCNA). Among the 36 constructed modules, the top positive correlated module, at the stage of six days post spraying DPC, was sienna3. Notably, Stf0 sulfotransferase, cyclin-like F-box, and HOX12 were the hub genes in sienna3 that had high correlation with other genes in this module. Furthermore, the qPCR validated the high accuracy of the RNA-seq results. Conclusion Taken together, we have demonstrated the key role of these genes in DPC-induced growth inhibition in sugarcane. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-020-07352-w.
Background
Hormone regulation in plant culturing has been widely used to control the quality of agricultural and horticultural products [1]. Several hormones are known to affect the regulation and co-ordination of plant growth [2]. To date, auxins [3], gibberellins (GA) [4], cytokinins (CTK) [5], abscisic acid (ABA) [6], ethylene (ETH) [7], and brassinosteroids (BR) [8] have been the most widely used hormones for stimulating growth in crops. However, growth performance is not the only parameter sought after by farmers. For example, with excessive vegetative growth, crops such as cotton and sugarcane can hardly be controlled, leading to height irregularities in farmland and resulting in low productivity [9,10]. Thus, other growth-regulating chemicals have been introduced as alternatives to inhibit the relevant hormonal pathways.
Mepiquat chloride (DPC) is a well-known chemical that controls plant growth by suppressing the GA pathways [11,12]. As an exogenous plant growth regulator, DPC is a water-soluble substance that can be applied via spraying in farmland [13]. With low-dose DPC treatment, studies have observed reduced internode elongation and plant height [13,14]. Additionally, recent studies have revealed that DPC can also regulate the synthesis of endogenous hormones, carbohydrates, enzymes, and other organic molecules [15,16]. DPC treatment increased concentrations of chlorophyll, free proline, and soluble proteins, but depressed malondialdehyde levels, contributing to improved resistance to stress [17][18][19]. In addition, DPC promoted increases in calcium and phosphorus levels in leaves to strengthen their ability to resist disease [20,21]. Theoretically, it does this by regulating CTKs and GA synthesis, as well as by controlling CTK:GA ratios and DPC-mediated rhizogenesis [22]. However, the function and regulatory role of DPC is far from being systematically understood.
Sugarcane is a major agricultural crop for sugar production worldwide [23][24][25]. About 80% of the world's sugar is produced from sugarcane, making it a critical bioenergy crop [26]. Sucrose is primarily generated in the crop's stem and upper shoot [27,28], and the internode elongation of stems is associated with the deposition of sucrose [29]. In this context, GA is employed to stimulate internode elongation [30]. However, rapid stem growth may lead to lower sucrose accumulation [31,32]. Therefore, how to achieve an ideal balance for the most productive rate of stem growth is the key question in sugar production. In an attempt to solve this problem, DPC was introduced to counter the negative effects of GA treatment [33]. Although DPC is widely recognized as a regulator of GA and promotes resistance to stress [34,35], its underlying molecular mechanism is still unknown, and addressing it requires a systematic survey of DPC-mediated regulation in plants.
A previous study showed that during internode elongation, regulation by the microRNA-mRNA network in zeatin biosynthesis, nitrogen metabolism, and plant hormone signal transduction pathways played a part in stem growth in sugarcane [36,37]. These effects may be mediated by GA20-oxidase (GA20-OX1) and a gibberellin receptor (GID1). DPC has shown inhibitory effects on GA generation by suppressing the activities of copalyl diphosphate synthase and ent-kaurene synthase [13]. These results revealed part of the molecular mechanism by which DPC controls growth performance. However, a vast amount of information about the roles of DPC in growth and resistance to stress remains unknown. Herein, we used weighted gene co-expression network analysis (WGCNA), a mathematical method for identifying key gene networks and hub genes [38][39][40]. The present study focused on the transcriptome changes induced by DPC treatment using the Illumina HiSeq 4000 platform. The evidence presented here provides new insights into DPC function in controlling stem growth as well as in regulating resistance to stress, which are the two most economically important traits in sugarcane.
Growth performance
The growth performance of each group on different days is shown in Fig. 1a. At the beginning of the experiment (0 days), no significant difference was found between the control and DPC groups (P > 0.05). However, the sugarcane heights on days 3, 6, and 12, as well as that of mature sugarcane, were significantly higher in the control than in the DPC group (P < 0.05) (Fig. 1b). In contrast to sugarcane height, the growth rates of the DPC group were significantly lower on days 3, 6, and 12 when compared to the control (P < 0.05) (Fig. 1c). Moreover, all the internodes were significantly longer in the control group (Fig. 1d).
Full-length transcriptome of sugarcane
To generate a high-accuracy reference for read mapping, full-length mRNA sequencing was performed using the PacBio Sequel platform on internodes from mature sugarcane. A total of 17 billion raw reads were obtained. The average length was 2718 bp and the N50 was 3011 bp. After circular-consensus sequence (CCS) extraction, 428,444 reads were identified. Among these reads, 348,840 (81.42%) were full-length reads containing 5′ adaptors, poly(A) tail signals, and 3′ adaptors. Meanwhile, 999 million full-length non-chimeric (FLNC) reads with an average length of 2906 bp were identified. These FLNC reads from the cDNA library contain repetitive isoforms that provide data for isoform analysis by alignment and assignment to different clusters. The present full-length transcriptome generated 72,671 isoforms. Of these, the average length was 2888.94 bp and the N50 was 3073 bp (Additional file 2).
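As a sketch of how the N50 values quoted above are computed (an illustrative Python function run on toy lengths, not the actual sugarcane reads):

```python
def n50(lengths):
    """N50: the largest length L such that reads/isoforms of length >= L
    together account for at least half of the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

# Toy isoform lengths in bp (illustrative only)
print(n50([5000, 4000, 3000, 2000, 1000]))  # 4000
```

Sorting in descending order and stopping at the half-total point mirrors how assembly and Iso-Seq tools report N50.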
The isoforms were annotated by alignment against protein and nucleotide databases. In total, 69,803, 56,843, 47,438, and 30,240 isoforms were annotated from nr, Swiss-Prot, KOG, and KEGG, respectively. Combining these results, a total of 69,867 isoforms were annotated (Additional file 3). The isoforms were also aligned to different species. The five species with the most hit sequences were Saccharum spontaneum, Setaria italica, the Oryza sativa Japonica group, Dichanthelium oligosanthes, and Sorghum bicolor. In addition, these isoforms were annotated with GO terms assigned to three categories: biological process (50,805 isoforms), cellular component (32,922 isoforms), and molecular function (26,696 isoforms). In the biological process category, metabolic process (13,462 isoforms) and cellular process (12,836 isoforms) were the two most frequent terms. Cell (7598 isoforms) and cell part (7597 isoforms) were the two most frequent terms in the cellular component category, while in the molecular function category, catalytic activity (13,086 isoforms) and binding (11,642 isoforms) were the two most frequent terms (Fig. 2c).
DEGs by DPC treatment
Paired-end reads of 150 bp were obtained for DEG analysis. In total, 1,404,530,300 raw reads were generated from 18 cDNA libraries using the Illumina HiSeq 4000 platform. After trimming the adaptors and removing the low-quality reads, 1,380,323,402 (98.28%) reads were retained as high-quality clean reads. These clean reads were mapped to the full-length transcriptome reference. The mapping ratios for the 18 cDNA libraries ranged from 73.97 to 83.78%. Using these data, normalized gene expression was calculated and analyzed by PCA (Fig. 3a). Two clusters, one containing the DPC group and one the control, were clearly defined by PCA. The first principal component, PC1, summarized 30.7% of the whole variability and discriminated samples according to the treatment. The second and third principal components, PC2 and PC3, summarized 25.1 and 17.4% of the whole variability, respectively. The DEG analysis showed that the comparison between the C2 and D2 groups had the most DEGs (a total of 6012 genes, comprising 3227 upregulated and 2785 downregulated genes). D1 showed more upregulated genes than the D2 and D3 groups, while fewer downregulated genes were found in D1 than in D2 and D3. In addition to C2-vs-D2, the C1-vs-C2 (2895 DEGs) and D1-vs-D2 (3157 DEGs) comparisons also showed large numbers of differentially expressed genes (Fig. 3b).
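The PCA step can be reproduced with a plain SVD on the centred expression matrix; the sketch below uses a random toy matrix (18 samples by 100 genes) rather than the study's real FPKM data:

```python
import numpy as np

def pca_explained_variance(X):
    """Fraction of total variance captured by each principal component.
    X: samples x genes matrix (e.g. log-transformed FPKM values)."""
    Xc = X - X.mean(axis=0)                  # centre each gene
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (X.shape[0] - 1)          # per-component variance
    return var / var.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 100))               # toy: 18 libraries x 100 genes
ratios = pca_explained_variance(X)
print(ratios[:3])                            # shares of PC1, PC2, PC3
```

With real data, the first few ratios would correspond to the 30.7%, 25.1% and 17.4% figures reported above.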
Functional analyses of DEGs between C2 and D2 groups
To illustrate the functions of the DEGs after DPC treatment, GO and KEGG enrichment analyses of the C2-vs-D2 comparison, which had the most DEGs, were performed. The upregulated and downregulated genes were annotated in 29 and 37 GO terms, respectively (Fig. 4a, b). The GO-enriched terms with the four most upregulated genes were DNA metabolic process, negative regulation of biological process, regulation of translation, and regulation of cellular amide metabolic process. Meanwhile, the GO-enriched terms with the two most downregulated genes were single-organism transport and single-organism localization (Additional file 4). KEGG enrichment analysis showed that 17 and 30 pathways were enriched in the upregulated and downregulated genes, respectively (Fig. 5a, b). For both the upregulated and downregulated genes, metabolic pathways and biosynthesis of secondary metabolites were the top two enriched KEGG pathways with the most genes. Among the upregulated genes, 55 were found to increase in the plant hormone signal transduction pathway. Meanwhile, phenylpropanoid biosynthesis, flavonoid biosynthesis, flavone and flavonol biosynthesis, and glucosinolate biosynthesis were enriched in the downregulated genes (Additional file 5). These KEGG pathways are associated with the growth and development of internodes.
WGCNA and hub genes
The WGCNA divided the genes into 36 modules (Fig. 6). Based on the identification of DEGs, we focused on the D2 group, which contained significant gene expression changes and represents the crucial stage for internode elongation. We found that sienna3 was the module most significantly correlated with the D2 stage (p = 1e-4) (Additional file 6) (Fig. 7). The sienna3 module contained 33 genes, and the top three hub genes, namely Stf0 sulfotransferase, cyclin-like F-box, and HOX12, were identified in this module. These three hub genes correlated with 30 genes (Additional file 7) (Fig. 8).
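Module detection and hub-gene ranking in WGCNA rest on a soft-thresholded correlation network, in which hub genes are those with the highest intramodular connectivity. A minimal numpy sketch on toy data (the β value and matrix here are illustrative, not the study's):

```python
import numpy as np

def soft_threshold_adjacency(expr, beta):
    """WGCNA-style unsigned adjacency: a_ij = |Pearson cor(i, j)| ** beta.
    expr: samples x genes matrix of expression values (e.g. FPKM)."""
    corr = np.corrcoef(expr, rowvar=False)   # gene-gene correlation matrix
    adj = np.abs(corr) ** beta               # soft thresholding
    np.fill_diagonal(adj, 0.0)               # ignore self-connections
    return adj

def connectivity(adj):
    """Intramodular connectivity: hub genes have the largest row sums."""
    return adj.sum(axis=1)

rng = np.random.default_rng(1)
expr = rng.normal(size=(18, 6))              # toy: 18 samples x 6 genes
adj = soft_threshold_adjacency(expr, beta=10)
hub_index = int(connectivity(adj).argmax())  # index of the toy "hub" gene
print(hub_index)
```

Raising |cor| to the power β suppresses weak correlations while preserving strong ones, which is what pushes the network towards a scale-free topology.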
Validation of RNA-seq results
qPCR was used to validate the RNA-seq results. Nine genes were randomly selected for the analysis. Except for GID2 and PBS1, the other six tested genes, GA2OX1, GID1, MPK4, CML49, PRPF8, and ACO2, showed qPCR results similar to those of the RNA-seq. Moreover, the expression trends of six out of eight genes from qPCR and RNA-seq were highly consistent, indicating that the majority of genes had the same tendency (Fig. 9). The three hub genes, Stf0 sulfotransferase, cyclin-like F-box, and HOX12, were also analyzed by qPCR, and the results were similar between qPCR and RNA-seq (Fig. 10). These results showed the high reliability of the RNA-seq data.
Discussion
Sugarcane is the main source of sugar for industry, accounting for 79% of sugar production worldwide. Attempts at developing techniques for controlling sugarcane growth, accelerating yields, and improving sugarcane breeding biotechnology have resulted in varied uses of GA and DPC, two chemicals that regulate plant growth in sugar farming with different effects. GA stimulates sugarcane internode elongation by regulating the genes associated with zeatin biosynthesis, nitrogen metabolism, and the plant hormone signal transduction pathway [41], while DPC suppresses sugarcane growth. However, compared to the well-characterized mechanism of GA-stimulated growth, the molecular mechanisms of DPC are unclear. Thus, in the present study, we focused on the transcriptomic regulation of sugarcane by DPC and discuss the key genes that mediate its growth-suppressive effect.
First, to obtain a high-quality reference for gene annotation, we generated a full-length transcriptome from sugarcane, sequenced using the PacBio Sequel platform, thereby generating 72,671 isoforms. Compared to Illumina platforms, the PacBio Sequel platform yields longer transcripts, which is an advantage in the construction of high-quality references for short-read analysis. The present study generated reads with an N50 of 3011 bp. These long reads guarantee longer contigs and isoforms for subsequent transcriptome analysis [42]. Notably, the N50 was 3073 for the isoforms in the present study. Sugarcane is a widely cropped plant and, to date, a large number of different varieties have been developed. Among these are the Guitang varieties developed in Guangxi, which have become a series of varieties planted in southern China [43]. GT42, belonging to the Guitang varieties, is a new breeding line with higher sugar productivity [43]. Although the genome of sugarcane was not reported until 2018, the genome data may differ among varieties [44]. Our study is the first to report the full-length transcriptome of GT42. We believe that these data will accelerate studies on new high-yielding crops and provide a high-quality reference for analyzing Illumina short reads. They also provide an opportunity to illustrate the function of internodes in GT42. Notably, the most abundant GO terms in the biological process category for GT42 isoforms included metabolic process and cellular process; these functional isoforms thus showed an assignment of function similar to previous results from sugarcane [44][45][46]. Based on these data, GT42 has a functional constitution similar to that of other sugarcane varieties. The present full-length transcriptome is the first to generate general information on GT42 and provides a high-quality reference transcriptome for further investigation of this variety.
DPC is one of the most successful and widely used chemicals for regulating plant growth. Its application has been shown to reduce internode length and leaf size in cotton and sugarcane [12]. The present study also showed that DPC inhibited internode length in GT42, in line with previous results. After establishing the effects of DPC on internode growth, the next question was to determine the molecular mechanism of DPC function in sugarcane. To do so, we used RNA-seq to profile gene expression regulation. Using the HiSeq technique, we obtained millions of short reads to reveal expression at different stages after DPC treatment. Thanks to the high-quality full-length transcriptome data, the mapping ratios for these libraries ranged from 73.97 to 83.78%. The comparison between C2 and D2 had the most DEGs, 6012 genes. This number of DEGs was much higher than that in C1-vs-D1 and C3-vs-D3, suggesting that the gene expression changes between the control and DPC treatment occurred mainly at the second stage, namely six days post application via spraying. In a study of cotton sprayed with DPC, the 96 h post-spraying stage had the most DEGs compared to the 48 h and 72 h stages. From this, it seems that DPC results in changes in gene expression over a course of four to six days; gene expression regulation by DPC is not an acute effect. After 10 days, the effects of DPC on gene expression were diminished. We therefore suppose that the most effective period of DPC-regulated gene expression is around six days post spraying.
The KEGG enrichment analysis showed that the expression levels of 55 genes in the plant hormone signal transduction pathway increased after DPC treatment. Internode growth is controlled by several hormonal genes, such as GA biosynthesis genes, auxin-related genes, and ethylene genes. It has been reported that GA treatment can significantly upregulate these genes, while DPC may suppress hormone expression. Specifically, in Agapanthus praecox, auxin-related genes were shown to be inhibited by DPC treatment [47]. Surprisingly, the present study indicated that DPC increased the expression levels of several hormonal genes. This difference may be due to the different species examined; sugarcane may therefore have a different response to DPC at the molecular level. We also found that several key pathways were downregulated by DPC, with phenylpropanoid biosynthesis, flavonoid biosynthesis, flavone and flavonol biosynthesis, and glucosinolate biosynthesis enriched among the downregulated genes. The phenylpropanoid pathway provides metabolites for plant growth, contributing to the requirements of lignin biosynthesis [48]. Moreover, flavone, flavonol, and glucosinolate are key metabolites for internode growth [49,50]. Flavonol biosynthesis can be affected by light intensity and, in previous studies, led to different growth appearances in Ginkgo (Ginkgo biloba) [51]. Meanwhile, the glucosinolate concentration, influenced by sulfur and nitrogen supplementation, was associated with the growth of broccoli [52]. The downregulation of genes in these pathways may contribute to the shortening of sugarcane internodes.
To determine the key gene modules and hub genes affected by DPC treatment, WGCNA was performed. In the sienna3 module, 33 genes were found to be highly correlated with the three hub genes; these critical genes therefore play a key role in the module. Hub genes are genes whose expression levels correlate with those of many other genes and can be identified by mathematical methods. The top three hub genes identified in this study were Stf0 sulfotransferase, cyclin-like F-box, and HOX12. Stf0 belongs to the sulfotransferase family, which affects root development processes, elongation growth, and gravitropism [53]. In several plants, including Medicago truncatula, Lotus japonicus, and Arabidopsis thaliana, cyclin-like F-box genes were expressed in all tissues containing highly active dividing cells. Knockdown of this gene resulted in the accumulation of CYCB1;1, suggesting that the cyclin-like F-box gene regulates the cell cycle in dividing cells [54]. Furthermore, it has been reported that HOX12 regulates panicle exsertion by modulating EUI1 gene expression [55]. These three hub genes were correlated with the other genes in the sienna3 module. Based on this information, it can be concluded that Stf0 sulfotransferase, cyclin-like F-box, and HOX12 mediate a gene group and constitute a gene network that contributes to the DPC-induced effects on sugarcane growth.
Conclusion
In summary, the full-length GT42 transcriptome was first reported in this study, providing an informative resource for sugarcane breeding and transcriptome analysis. RNA-seq suggested that the main effects of DPC on sugarcane gene expression occurred six days post spraying. Furthermore, the significantly enriched gene function categories contained several pathways related to internode growth, including multiple pathways that participate in the production of metabolic products. Additionally, the gene module included 33 genes that were highly correlated with the stage of six days post spraying in the DPC group, showing a potential role in the response to DPC. Among these genes, Stf0 sulfotransferase, cyclin-like F-box, and HOX12 were hub genes that may regulate the other genes in this module. Further studies should focus on determining the function of these key genes in detail, especially with regard to controlling internode growth affected by DPC.

Fig. 1 Effects of DPC on sugarcane growth performance on different days after treatment. a The growth performance of sugarcane at 0, 3, 6, and 12 days in the control and DPC groups. b The height of sugarcane on different days after DPC treatment (n = 4). c The growth rate of sugarcane on different days after DPC treatment (n = 4; mature period, n = 10). d The internode length of mature sugarcane after DPC treatment. * indicates P < 0.05
Sugarcane preparation
All the sugarcane samples used were bred at the Sugarcane Research Institute (SRI), Guangxi Academy of Agricultural Sciences, Nanning, China. The sugarcane variety GT42 was sourced from the SRI Experimental Farm in Nanning, China. The team selected 10-month-old cane stalks to obtain buds from the middle internodes, which were then cut into single-bud setts. The setts were incubated at 52°C for 30 min to eliminate pathogenic bacteria and were subsequently planted in a moist sandbox and maintained in an artificial climate box (Essenscien, USA). The culturing conditions were as follows: temperature, 28.0 ± 0.1°C; humidity, 75 ± 1.5% RH; photoperiod, 12 h light and 12 h dark with 100% full light (light intensity 25,000 lx). Once the seedlings grew their first two leaves, they were transferred to plastic pots (35 cm width × 35 cm length × 50 cm height), with two seedlings planted per pot. After five days, the seedlings were randomly divided into two replicates. The seedlings were cultivated to the pre-elongation stage, containing 9-10 leaves, defined as the early elongation stage. At this stage, the DPC group was sprayed with 200 mg/L DPC (Solarbio Life Science, Beijing, China) until the solution began to drip from the leaves; water was sprayed on the control group in the same manner. All the sugarcane pots were placed in a greenhouse in 18 rows of 1.2 m width. The first three columns belonged to the control group and the last three columns to the DPC group. At 3, 6, and 12 days post spraying, the third internodes were collected for further assays. Control samples from 3, 6, and 12 days post spraying were named C1, C2, and C3, respectively. Similarly, the DPC-group samples from 3, 6, and 12 days post spraying were named D1, D2, and D3, respectively. All samples were stored at −80°C until RNA isolation. For each group, at each time point, three biological replicates were collected for analyses.
Determination of growth performance
Sugarcane growth performance was measured in the control and DPC groups. At 3, 6, and 12 days post spraying, we measured the stalk height from the soil surface to the dewlap of the youngest fully expanded leaf, as well as the length of the internodes. For each group, five plants were randomly chosen for measurement. The whole height and the first seven internode lengths (from the shoot apex of 10 matured plants) were measured as well.
PacBio Iso-Seq
To obtain an accurate reference for the genes in sugarcane, full-length transcriptome sequencing was performed. RNA libraries of internodes from one mature sugarcane plant at 10 months of age were prepared. The mRNAs were first enriched with oligo(dT) magnetic beads, and the full-length cDNAs were synthesized using the Clontech SMARTer PCR cDNA Synthesis Kit (Pacific Biosciences, USA). From this, three libraries with different insert lengths (1-2 kb, 2-3 kb, and 3-6 kb) were constructed. Sequencing was performed on a PacBio Sequel System (Pacific Biosciences, USA) and the raw sequences were analyzed using SMRT Link v5.0.1 software. Based on the 5′ and 3′ primers as well as the poly(A) tail, the full-length, non-full-length, chimeric, and non-chimeric categories were identified. The non-full-length sequences were polished using the Quiver algorithm, while the Illumina RNA-seq data were used to correct the low-quality sequences. The sequences were annotated using the nr, SwissProt, COG/KOG, GO, and KEGG databases, and the unannotated sequences were further used for CDS prediction.
Preparation of RNA-seq libraries
Total RNA from three plants in each group was isolated using Trizol reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. A total of six RNA-seq libraries (three from the control group and three from the DPC group) were prepared for next-generation sequencing. The quantity and integrity of the total RNA were assayed using an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA). The mRNAs were enriched with oligo(dT) magnetic beads and fragmented using fragmentation buffer. First-strand cDNA was synthesized using random 6-base primers. The second-strand cDNA was then synthesized using DNA
Transcriptome mapping and differentially expressed gene (DEG) identification
The sequencing adaptors were first trimmed, then low-quality reads with an unknown nucleotide (N) ratio > 10% or a Q-value ≤ 20 were removed. The retained high-quality clean reads were used for the following analyses. The clean reads were mapped to the full-length transcriptome reference using TopHat (version 2.0.9) [56], and relative gene expression was calculated and normalized as Fragments Per Kilobase of transcript per Million mapped reads (FPKM). Furthermore, principal component analysis (PCA) was performed using the R package (http://www.r-project.org/) to evaluate the reproducibility of the biological replicates. Genes with a false discovery rate (FDR) < 0.05 and a log2(fold change) > 1 or < −1 between the control and DPC groups were identified as DEGs.
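The FPKM normalisation and the DEG cut-offs described here can be sketched as follows (toy counts and a hypothetical gene; real pipelines derive the FDR from a count model, which is omitted here):

```python
import math

def fpkm(count, length_bp, total_mapped):
    """Fragments Per Kilobase of transcript per Million mapped fragments."""
    return count * 1e9 / (length_bp * total_mapped)

def is_deg(fpkm_ctrl, fpkm_trt, fdr, fdr_cut=0.05, lfc_cut=1.0, eps=1e-6):
    """DEG rule used here: FDR < 0.05 and |log2 fold change| > 1."""
    lfc = math.log2((fpkm_trt + eps) / (fpkm_ctrl + eps))
    return fdr < fdr_cut and abs(lfc) > lfc_cut

# Toy gene: 500 fragments on a 2 kb transcript, 1 million mapped fragments
print(fpkm(500, 2000, 1_000_000))       # 250.0
print(is_deg(10.0, 50.0, fdr=0.01))     # True: log2(5) is about 2.32
```

The 1e9 factor combines the per-kilobase (1e3) and per-million-reads (1e6) scalings in one step.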
Functional annotation of DEGs
To define the functions of the DEGs, enrichment analyses of Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were performed. The DAVID online tools (http://david.ncifcrf.gov/) were employed for the enrichment analysis. GO terms with adjusted P ≤ 0.001 and KEGG pathways with P ≤ 0.001 were considered significantly enriched.
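Enrichment tests of this kind are typically one-sided hypergeometric (Fisher) tests; a stdlib-only sketch with toy numbers (DAVID itself uses a modified Fisher statistic, so this is illustrative rather than its exact computation):

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k): probability of seeing at least k pathway genes among n DEGs
    drawn from N annotated genes, of which K belong to the pathway."""
    upper = min(n, K)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, upper + 1)) / comb(N, n)

# Toy numbers: 1000 annotated genes, 50 in a pathway,
# 100 DEGs of which 15 hit the pathway (about 5 expected by chance).
p = enrichment_p(N=1000, K=50, n=100, k=15)
print(p < 0.001)   # True: strong enrichment
```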
WGCNA
The WGCNA was performed to identify key gene groups and hub genes based on the FPKM values using the R package [57]. The data were first filtered, retaining genes in the top 25% of variation (standard deviation/mean) across samples. The FPKM matrix of the retained genes was then used to create a weighted adjacency matrix. The soft-threshold power (β), set at 10, was selected to approximate a scale-free topology. The parameters for construction of the gene modules were power = 8, minimum module size = 30, and branch merge cut height = 0.25. The correlations between gene modules and treatment groups were evaluated using correlation analysis.
Immunohaemostasis: a new view on haemostasis during sepsis
Host infection by a micro-organism triggers systemic inflammation, innate immunity and complement pathways, but also haemostasis activation. The role of thrombin and fibrin generation in host defence is now recognised, and thrombin has become a partner for survival, whereas it was previously seen only as one of the "principal suspects" of multiple organ failure and death during septic shock. This review first focuses on pathophysiology. The roles of the contact activation system, polyphosphates and neutrophil extracellular traps have emerged, offering new potential therapeutic targets. Interestingly, newly recognised host defence peptides (HDPs), derived from thrombin and other "coagulation" factors, are potent inhibitors of bacterial growth. Inhibition of thrombin generation could promote bacterial growth, while HDPs could become novel therapeutic agents against pathogens as resistance to conventional therapies grows. In a second part, we focus on the diagnostic challenge and stratification of sepsis-induced coagulopathy, from "adaptive" haemostasis to "noxious" disseminated intravascular coagulation (DIC), either thrombotic or haemorrhagic. Besides the usual coagulation tests, we discuss cellular haemostasis assessment, including neutrophil, platelet and endothelial cell activation. We then examine therapeutic opportunities to prevent or to reduce "excess" thrombin generation while preserving "adaptive" haemostasis. The failure of international randomised trials of anticoagulants during septic shock may modify the view of haemostasis as a target to improve survival. On the one hand, patients at low risk of mortality may not be treated, to preserve "immunothrombosis" as a defence, while, on the other hand, patients at high risk with patent excess thrombin and fibrin generation could benefit from available (antithrombin, soluble thrombomodulin) or ongoing (FXI and FXII inhibitors) therapies.
We propose to better assess the coagulation response during infection through improved knowledge of pathophysiology and systematic testing, including determination of DIC scores. This is one of the clues to allocating the right treatment to the right patient at the right moment. Electronic supplementary material The online version of this article (10.1186/s13613-017-0339-5) contains supplementary material, which is available to authorized users.
Background
The aim of this review is to describe the battle between a foreign pathogen and the host with regard to thrombin generation, one of the key molecules in winning or losing the war for survival. Thrombin is involved in thrombus formation (via the fibrin network), in anticoagulation and fibrinolysis [via thrombomodulin and (activated) protein C], in focalisation (via glycosaminoglycans and antithrombin), but also in vascular permeability and tone (via endothelial cell receptors and kinin pathways) [1][2][3].
Haemostasis should therefore be considered a non-specific first line of host defence, at least when localised to a single endothelial injury, considering the growing role of platelets as immune cells [11][12][13]. This immune response has been called "immunothrombosis" [14]. In this line, the immunohaemostasis process may help to capture pathogens, prevent tissue invasion and concentrate antimicrobial cells and peptides, including thrombin-derived host defence peptides. Therefore, when regulated, low-grade activation of thrombin generation may help the host survive a bacterial challenge [14]. Indeed, inhibition of thrombin generation by dabigatran promotes bacterial growth and spreading, with increased mortality, in an experimental model of Klebsiella pneumoniae-induced murine pneumonia [15]. On the other hand, thrombin can become deleterious if ongoing activation of coagulation, owing to defective natural anticoagulants, leads to excessive thrombin formation. Combined with defective fibrinolysis, thrombin results in fibrin deposits in microvessels and eventually in disseminated intravascular coagulation (DIC) [16,17]. DIC thus represents a deregulated and/or overwhelmed haemostasis activation response triggered by pathogens and/or host responses during septic shock [14]. DIC can be classified into "asymptomatic", "bleeding" (haemorrhagic), "thrombotic" (organ failure) and ultimately "massive bleeding" (fibrinolytic) types, according to its clinical presentation [18]. Except for the asymptomatic type, all types are characterised by delayed clotting times (PT and aPTT) and low fibrinogen and platelet counts owing to their consumption [19,20]. Although known for many years, the role of DIC in the pathogenesis of septic shock remains a matter of debate [21][22][23]. Since then, coagulation has been considered a potential therapeutic target. The recognition of new targets implicated in thrombosis, but not in haemostasis, opens a window onto innovative therapies.
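The ISTH overt-DIC score operationalises this stratification from routine tests. A simplified sketch follows, with thresholds as commonly published; cut-offs for the fibrin-related marker vary by assay, and real use requires an underlying disorder plus clinical judgement:

```python
def overt_dic_score(platelets, fibrin_marker, pt_prolongation_s, fibrinogen_g_l):
    """Simplified ISTH overt-DIC score (illustrative, not a clinical tool).
    platelets: platelet count in 10^9/L
    fibrin_marker: 'none' | 'moderate' | 'strong' increase (e.g. D-dimer)
    pt_prolongation_s: prothrombin time prolongation in seconds
    fibrinogen_g_l: fibrinogen concentration in g/L
    A score >= 5 is compatible with overt DIC."""
    score = 2 if platelets < 50 else (1 if platelets < 100 else 0)
    score += {"none": 0, "moderate": 2, "strong": 3}[fibrin_marker]
    score += 2 if pt_prolongation_s > 6 else (1 if pt_prolongation_s >= 3 else 0)
    score += 1 if fibrinogen_g_l < 1.0 else 0
    return score

s = overt_dic_score(40, "strong", 7, 0.8)
print(s, s >= 5)   # 8 True: compatible with overt DIC
```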
Physiology of thrombin generation
For didactic purposes, haemostasis can be separated into three phases: i. initiation; ii. propagation and regulation; iii. fibrinolysis.
A brief overview of haemostasis is available in Additional file 1, and Additional file 2: Figure S1 depicts the different steps of thrombin generation, fibrin formation and regulation [1,24].
Pathophysiology of thrombin and fibrin formation during infection
The contact between a prokaryote and a eukaryote can result in symbiosis or in infection, the latter ending in host or pathogen survival. To survive infection, the host initiates a complex inflammatory response involving innate immunity, the complement system and the coagulation pathways. These two cascades share a common origin, but many refinements over the past 500 million years have improved their specificities [25,26]. In this view, coagulation is fundamental to survival, and the following section will highlight the role of the contact activation system (not involved in "normal" haemostasis), the interplay between pathogens and the coagulation and fibrinolysis pathways, and the emerging role of antimicrobial host defence peptides generated by proteolysis of "coagulation" proteins [17,27,28].
Initiation: the emerging role of the contact activation system (Fig. 1)
Physiology or pathophysiology?
An old view of haemostasis distinguished two initiation pathways: tissue factor (the "extrinsic" pathway) and the contact activation system (CAS) (the "intrinsic" pathway). The latter requires a "contact" activator, prekallikrein (PK), high molecular weight kininogen (HK), factor XII (FXII) and FXI [29]. A deficit of one of these proteins results in a prolonged aPTT, although no haemorrhagic diathesis is evidenced in patients. CAS does not seem to be involved in "normal" haemostasis and may be restricted to pathological conditions exposing negatively charged surfaces, including sepsis (via NETs and polyP), but also acute respiratory distress syndrome (ARDS) [30] and blood contact with artificial surfaces (intravascular catheters, extracorporeal circuits).
A "contact" activator is a negatively charged surface able to bind FXII and induce a conformational change that auto-activates FXII into α-FXIIa in the presence of Zn2+. α-FXIIa then converts PK to kallikrein (KAL), which enables a reciprocal hetero-activation of α-FXII, leading to large amounts of β-FXIIa and thereafter to activation of platelet GPIb-bound FXI. β-FXIIa is also able to activate the classic complement pathway via C1r and, to a lesser extent, C1s, linking haemostasis and complement-mediated host defence [3].
CAS and PK also activate fibrinolysis and tissue proteolysis. HK linked to urokinase-type plasminogen activator receptor (uPAR) is able to activate pro-uPA into uPA that in turn activates plasminogen into matrix-bound plasmin. Moreover, BK induces tPA release by endothelial cells when linked to B1R [2].
Besides, and related to CAS, the kallikrein/kinin system (KKS) is also activated [3]. Both are regulated by the serpin C1 esterase inhibitor (C1-INH); its deficiency (responsible for hereditary angioedema) or consumption (during septic shock, but also after extracorporeal circulation) results in an increased-permeability syndrome [31].
Polyphosphates (polyP)
PolyP are negatively charged inorganic polymers of phosphate residues, highly conserved in prokaryotes and eukaryotes. They are an important source of energy, but are also involved in cell responses. The half-life of polyP is very short owing to degradation by phosphatases [32,33].
Medium-sized soluble polyP are released by activated platelets and mast cells. They are able to induce FXII activation only if present in large amounts [34,35]. PolyP60-80 can also bind α-FXIIa, preventing its further degradation and resulting in a prolonged half-life. In the presence of fibrin polymers associated with polyP60-80, α-FXIIa can activate fibrin-bound plasminogen into plasmin, resulting in an "intrinsic" fibrinolytic activity overcoming antifibrinolytic properties [36,37]. Interestingly, activated platelets can retain polyP60-80 on their surface, assembled with divalent metal ions (Ca2+, Zn2+) into insoluble spherical nanoparticles. These nanoparticles provide a higher polymer size and become able to trigger contact system activation [38,39].
On the other hand, large-sized insoluble polyP are released by bacteria and yeasts. These polyP are able to support auto-activation of FXIIa and to promote thrombin generation independently of FXI activation. PolyP can bind FM, resulting in clots with reduced stiffness and increased deformability [40]. Moreover, polyP are incorporated into the fibrin mesh, inhibiting fibrinolysis [34].

Fig. 1 […] released by bacteria. Both are "contact" activators, i.e. negatively charged surfaces able to bind FXII and induce a conformational change that auto-activates FXII into α-FXIIa in the presence of Zn2+. α-FXIIa then converts PK to kallikrein (KAL), which enables a reciprocal hetero-activation of α-FXII, leading to large amounts of β-FXIIa and thereafter to activation of platelet GPIb-bound FXI. The large amount of FXIIa generated converts platelet-bound FXI into FXIa, involved in thrombin generation and fibrin formation. Interestingly, neutrophil elastase (NE) released with NETs also enhances platelet adhesion and activation (inactivation of ADAMTS13) and coagulation, with inhibition of tissue factor pathway inhibitor (prolonged tissue factor-induced initiation) and of thrombomodulin (impaired activation of protein C). Moreover, polyP enhance activation of platelet-bound FXI by FXIIa and can be incorporated into the fibrin network, reinforcing its structure. On the other hand, the kallikrein/kinin system (KKS) is also triggered. FXIIa and KAL convert high molecular weight kininogen (HK) into biologically active bradykinin (BK). BK is not involved in thrombin generation but mainly in the inflammatory response, via two G protein-coupled receptors, B1R and B2R. BK results in increased vascular permeability, vasodilation (mediated by both PGI2 and nitric oxide after iNOS induction), oedema formation and ultimately hypotension.
Neutrophil extracellular traps (NETs)
Neutrophils have long been considered suicidal cells killing extracellular pathogens. In recent years, however, neutrophil biology has evolved into a more complex network linking innate immunity, adaptive immunity and haemostasis [41][42][43]. Neutrophils not only engulf pathogens (phagocytosis) and release granule contents, but also release their nuclear content, essentially histones and DNA fragments, forming a net. These nets, called neutrophil extracellular traps (NETs), carry histones and other granule enzymes such as myeloperoxidase (MPO) and neutrophil elastase (NE), and enable the trapping of pathogens and blood cells, including platelets, in their meshes [44].
NETosis plays a critical role in host defence through innate immunity, but also through other procoagulant mechanisms: i. Negatively charged DNA constitutes an activated surface for coagulation factors assembly, including contact phase; ii. Enzymatic inhibition of tissue factor pathway inhibitor (TFPI) and thrombomodulin (TM) by neutrophil elastase; iii. Direct recruitment and activation of platelets by histones [14].
Recent data support direct activation by DNA and histones rather than by NETs themselves [49]. High levels of circulating histones have been evidenced in septic shock. Histone infusion induces intravascular coagulation with thrombocytopenia and increased D-dimers. Anti-histone antibodies can prevent both lung and cardiac injuries in experimental models. C-reactive protein can bind histones and reduce histone-induced endothelial cell injury; C-reactive protein infusion rescues histone-challenged mice [50] (Table 1).
Outer membrane proteins (omptins) are surface-exposed, transmembrane β-barrel proteases expressed by some gram-negative bacteria. They display fibrinolytic and procoagulant activities required for pathogenicity [71,72]. Yersinia pestis is the agent of bubonic and pneumonic plague, both of which combine haemorrhagic and thrombotic disorders. Pla, a direct activator of host plasminogen, requires rough LPS for its activity. Pla also promotes fibrinolysis by activation of uPA, inactivation of the serpins PAI-1 and α2-antiplasmin, and cleavage of the C-terminal region of TAFI, reducing its activation by the thrombin-thrombomodulin complex [73,74]. Pla is also able to cleave TFPI. Interestingly, dysplasminogenaemia (Ala601 → Thr), present in about 2% of the Chinese, Korean and Japanese populations, confers protection against plague. Homozygous individuals have a reduced plasminogen activity of about 10%, with few thrombotic events but enhanced survival during infection by Y. pestis, and also by group A streptococci and S. aureus, which require plasminogen activation for pathogenicity [75].
Inactivation of fibrinolysis
Inhibition of fibrinolysis is another way to promote clot stabilisation [77,78].
Inhibition of coagulation
Bacteria can also block the contact activation pathway [79,80] or thrombin generation [81] in order to evade host defence.
Host defence peptides
Innate immunity is mediated by cell activation via Toll-like receptors (TLRs). The resulting cationic and amphipathic small peptides (15-30 amino acids, < 10 kDa) have many biological properties, including direct bactericidal effects, but also immunomodulation and angiogenesis. They have been named "host defence peptides" (HDPs) or "antimicrobial peptides" (AMPs).
In eukaryotes, two main families can be identified: defensins (disulphide-stabilised peptides) and cathelicidins (α-helical or extended peptides). HDPs can be classified into three categories according to their target in prokaryotes: i. plasma membrane-active peptides disrupting membrane integrity; ii. intracellular inhibitors of transcription or translation factors; and iii. cell wall-active peptides interfering with cell wall synthesis and bacterial replication [82].
Limited proteolysis of many proteins involved in blood coagulation (activators as well as inhibitors) is now recognised to generate HDPs that may participate in host defence. Interestingly, the development of synthetic HDPs is a new anti-infectious therapeutic strategy in the face of pathogen resistance to (conventional) antibiotics [83].
Serine protease-derived peptides
Human serine proteases (including vitamin K-dependent blood coagulation factors and kallikrein system peptides) can be cleaved by proteases to generate C-terminal peptides with direct antimicrobial activities [84]. GKY25 is released from FIIa, FXa and FXIa after cleavage by neutrophil elastase [85]. This peptide slightly reduces P. aeruginosa growth but significantly reduces both the inflammatory response and mortality [86]. Bacteria are also able, mainly by unknown mechanisms, to generate HDPs from fibrinogen (GHR28) and high molecular weight kininogen (HKH20 and NAT26).
Table 1 (excerpt): D. Inhibition of coagulation
- Group A streptococci: streptococcal inhibitor of complement (SIC); target: HK; effect: inhibition of HK binding and contact phase activation [79,80]
- S. aureus: staphylococcal superantigen-like protein 10 (SSLP-10); target: FII; effect: inhibition of platelet binding and activation [81]

[…] by neutrophil elastase after binding to glycosaminoglycans [87], and KYE28 displays antimicrobial properties against gram-negative and gram-positive bacteria, but also against fungi [87]. Moreover, KYE28 can bind LPS, dampening the inflammatory response [88]. FFF21, derived from antithrombin, also shares antimicrobial activity after permeabilisation of the bacterial membrane [89]. The protein C inhibitor (PCI)-derived SEK20 peptide displays antimicrobial activity [90]. Interestingly, platelets bind PCI upon activation, resulting in a high concentration of PCI at the site of platelet recruitment, as observed during infection [91].
Diagnosis
Activation of the coagulation cascade is a physiological, innate and adaptive response during infection. This response can be overwhelmed, becoming hazardous and referred to as DIC, meaning disseminated intravascular coagulation, but also "death is coming" [92]. For many years, only two conditions were distinguished: "no DIC" and "DIC". This "schizophrenic" view of haemostasis needs to be revisited, as proposed by Dutt and Toh [93]: "The Yin-Yang of thrombin and protein C". There is indeed a continuum from adaptive to noxious thrombin generation. Moreover, DIC remains a medical paradox for critical care physicians: clinical diagnosis is often (too) late and biological diagnosis (too) frequent, in the absence of clinical signs or therapeutic opportunities [94].
Clinical diagnosis
Most patients with sepsis and septic shock do not present any clinical sign of "coagulopathy", even though routine laboratory tests are disturbed. Clinical examination should focus on purpura, symmetric ischaemic limb gangrene (with pulses preserved) [95] and diffuse oozing. A very specific sign is "retiform purpura", a net-like purpura reminiscent of livedo; however, unlike classic livedo, in which the meshes are erythematous, the meshes here are purpuric. The absence of induced bleeding on needle retrieval, when the skin is punctured to a depth of 3 to 4 mm within a livid or purpuric area, is a good indication of thrombotic microangiopathy [96].
Laboratory criteria
A single test will never be able to diagnose and stratify sepsis-induced coagulopathy. Only the combination of an underlying disease with evidence of cellular activation in the vascular compartment (including endothelial cells, leucocytes and platelets), procoagulant activation, fibrinolytic activation, inhibitor consumption and end-organ damage or failure allows such a diagnosis.
Underlying disease
In sepsis and septic shock, vascular injury is central and driven by different actors with overlapping kinetics, making it difficult to decipher a sequential order [97]. Acute kidney injury (AKI) is present in about half of patients: one-third of non-DIC patients versus four-fifths of DIC patients. This association between AKI and low platelets may be symptomatic of thrombotic microangiopathy (TMA), all the more so since Ono et al. [98] reported low ADAMTS13 activity and high UL-vWF in septic shock-induced DIC. Nevertheless, there are two important differences: the presence of schizocytes and the absence of prolonged clotting times in TMAs [99,100].
Hepatic injury is frequent but remains mild to moderate, with a slight increase in liver enzymes and bilirubin and a decrease in PT. On the other hand, severe hepatic ischaemia may lead to fulminant hypoxic hepatitis with a very low PT, together with low levels of the inhibitors AT and PC, mimicking DIC with ischaemic limb gangrene with pulses [101].
Cellular activation
Only indirect markers of cellular activation are available; most of them are not routinely assessed. These markers could be soluble molecules (released by shedding or by proteolytic cleavage) or cell-derived microvesicles, including microparticles (MPs). The role of MPs in septic shock and infection has been discussed elsewhere [102][103][104].
Endothelial cells E-selectin (CD62E), or endothelial-leucocyte adhesion molecule-1 (ELAM-1), is expressed only by endothelial cells after cytokine stimulation. CD62E is involved in leucocyte recruitment at the site of injury and can be released into the bloodstream as a free, soluble molecule (sCD62E) or membrane-bound after MP shedding (CD62E+-MPs). sCD62E is dramatically increased during septic shock, especially in DIC patients [8], although it was not associated with DIC diagnosis in one study [105,106]. Interestingly, CD62E+-MPs were not increased in septic shock, owing to proteolysis [8].
Endoglin (CD105, Eng) is a membrane protein expressed mainly by endothelial cells during vascular repair and inflammation-associated angiogenesis [107]. It contains an arginine-glycine-aspartic acid (RGD) tripeptide sequence that enables cellular adhesion through the binding of integrins or other RGD-binding receptors present in the extracellular matrix. Membrane-bound CD105 is involved in leucocyte α5β1 activation, resulting in leucocyte recruitment and extravasation on the one hand and in angiogenesis on the other, whereas MMP-14-cleaved soluble (s)CD105 abolishes extravasation and inhibits angiogenesis [107]. CD105 plays a pivotal role in endothelial cell adhesion to mural cells [108]. Soluble CD105 overexpression is also linked to other systemic and vascular inflammatory states, such as pre-eclampsia and HELLP syndrome, which are likewise characterised by haemostatic activation/deregulation [109] and podocyturia [108]. We have evidenced the presence of CD105+-MPs during septic shock, especially in DIC patients [8,110].
Endothelial cells also release soluble and microparticle-bound EPCR. sEPCR is a marker of endothelial injury and severity [111], while EPCR+-MPs can display an anticoagulant and cytoprotective pattern in the bloodstream [112,113].
Leucocytes Neutrophils and monocytes play a major role in sepsis-induced coagulopathy. After stimulation by thrombin and cytokines, monocytes can express TF and promote thrombin generation after cell membrane remodelling and phosphatidylserine (PhtdSer) exposure. Moreover, TF+-MPs of monocyte origin have been identified and could disseminate a procoagulant potential [7].
The role of neutrophils is more complex, involving both TF expression (fusion of TF+-MPs) [114] and NETs [115]. Direct evidence of the presence of NETs in the bloodstream is lacking, but histones (or nucleosomes), free DNA and myeloperoxidase can be detected in plasma and are significantly increased in septic shock-induced DIC [116]. Recently, our group showed cytological modifications of neutrophils in blood smears of patients with DIC [117]. Moreover, we evidenced neutrophil chromatin decondensation, assessed by measuring neutrophil fluorescence (NEUT-SFL) on a routine automated flow cytometer (Sysmex™ XN20) [118].
Platelets Inflammation resulting in the systemic inflammatory response syndrome (SIRS) is a potent inducer of both fibrinogen synthesis and mobilisation of the circulating platelet pool. The platelet count can reach 700-800 G/L, but thrombocytopenia can occur during sepsis. A "normal" value, that is to say within the normal range, should therefore be interpreted cautiously, as it may represent patent consumption. Moreover, enumeration is not function: during sepsis-induced coagulopathy, platelet activation follows thrombin generation and does not support the propagation phase of haemostasis, with impaired local supply of P-selectin, ADP, Ca2+ and cFXIII.
Erythrocytes Schizocytes are fragmented erythrocytes and are the cornerstone of TMA diagnosis. They are frequently observed on blood smears during DIC but remain of poor value for DIC diagnosis [119].
Procoagulant activation
Routine coagulation tests evidence a prolongation of both prothrombin time (PT) and activated partial thromboplastin time (aPTT). Nevertheless, PT is the more accurate: aPTT is only slightly prolonged during DIC, owing to the inflammatory response and the very high levels of FVIII released by injured endothelial cells.
Evidence of thrombin generation can be evaluated by quantification of prothrombin fragment 1 + 2 (F1 + 2) and/or thrombin-antithrombin (TAT) complexes. These tests are not routinely available. Moreover, we evidenced the lack of discrimination of F1 + 2 between DIC and non-DIC patients despite significant differences [8].
Fibrin formation can be quantified by fibrinopeptide A (FpA) (released with a 2:1 ratio), which is not routinely available [120]. Soluble fibrin monomers (FM) can be routinely quantified. They do not represent fibrin formation, but resting fibrin monomers not yet polymerised by FXIIIa. High FM levels can evidence increased production and/or defective polymerisation [121,122]. The accuracy of this biomarker is still a matter of debate (see below) [123,124].
Fibrinolytic activation
Fibrin(ogen) degradation products (FDPs) are heterogeneous small molecules generated by the action of plasmin on both the fibrin network (secondary fibrinolysis) and fibrinogen (primary fibrinogenolysis). D-dimers (the D-domains of two fibrin molecules stabilised by FXIIIa) are specific for fibrinolysis and should be preferred when available [125][126][127]. D-dimers attest to thrombin generation, fibrin formation and polymerisation, then fibrinolysis, while the absence of D-dimers can represent defective fibrinolysis despite the presence of fibrin. Other markers could be useful but are not available in routine laboratories: PAP (plasmin-antiplasmin) complexes, tPA and PAI-1 [128,129]. Both tPA and PAI-1 are dramatically increased during septic shock, regardless of DIC diagnosis. Early inhibition of fibrinolysis during sepsis-induced coagulopathy may thus cause diagnostic delay, given the importance of FDPs in DIC diagnosis.
Inhibitors consumption
Sustained thrombin generation leads to activation, then consumption, of regulatory mechanisms. TFPI is decreased during DIC [130]. Antithrombin can be, and should be, routinely assessed during sepsis-induced coagulopathy; the absence of a low AT level challenges the diagnosis of DIC [131]. Concerning the TM-APC pathway, assessment is complex: PC is decreased by consumption, whereas APC is increased, at least at the beginning of sepsis. Moreover, soluble forms of EPCR (sEPCR) [111] and TM (sTM) [132] can be found in the plasma of septic patients and correlate with vascular injury.
Global assessment of haemostasis
Thromboelastography (TEG) and rotational thromboelastometry (ROTEM™) are routinely used in operating theatres to monitor blood coagulation and "assess global haemostasis" [133]. Interestingly, they can also evaluate fibrinolysis at 30 and 60 min. Nevertheless, a recent Cochrane review concluded that there was little or no evidence for the accuracy of such devices, strongly suggesting that they should only be used for research [99,100]. Few data are available regarding septic shock-induced coagulopathy. A prospective study comparing septic shock patients, surgical patients and healthy volunteers evidenced hypocoagulability during DIC [134]; we may hypothesise that these DIC patients were in the "fibrinolytic" phase.
Scoring systems
Different scoring systems have been developed to ensure DIC diagnosis and are discussed in supplementary data (Additional file 1, Additional file 3: Table S1).
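The scores discussed in the supplementary data (e.g. the JAAM criteria) all work on the same principle: summing points for routine parameters against fixed cut-offs. As an illustration of that principle only, the sketch below implements the well-known ISTH overt-DIC score (Taylor et al., 2001), which is not detailed in this review's tables; the JAAM scores favoured later in the conclusion use different items and thresholds.

```python
def isth_overt_dic_score(platelets_g_l, fibrin_marker_level,
                         pt_prolongation_s, fibrinogen_g_l):
    """ISTH overt-DIC score (Taylor et al., 2001), for illustration.

    fibrin_marker_level: 0 = no increase, 1 = moderate, 2 = strong
    increase of a fibrin-related marker (e.g. D-dimers; the exact
    thresholds are laboratory-dependent).
    """
    score = 0
    # Platelet count (G/L = x10^9/L): >100 -> 0; 50-100 -> 1; <50 -> 2
    if platelets_g_l < 50:
        score += 2
    elif platelets_g_l < 100:
        score += 1
    # Fibrin-related marker: no increase -> 0; moderate -> 2; strong -> 3
    score += (0, 2, 3)[fibrin_marker_level]
    # PT prolongation over the upper reference limit, in seconds
    if pt_prolongation_s > 6:
        score += 2
    elif pt_prolongation_s >= 3:
        score += 1
    # Fibrinogen < 1.0 g/L -> 1
    if fibrinogen_g_l < 1.0:
        score += 1
    return score, score >= 5  # a score >= 5 is compatible with overt DIC

print(isth_overt_dic_score(45, 2, 7, 0.8))  # (8, True)
```

The point of such composite scores, as the text argues, is that no single parameter is diagnostic; only their combination, repeated daily, captures the evolving consumption process.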
New therapeutic opportunities?
A syllogism has long justified anticoagulant therapy during severe sepsis and septic shock: "the more severe the infection, the more thrombin is generated"; "the more thrombin is generated, the more organ failure and death supervene"; therefore, "the more you prevent thrombin generation, the more you will improve your patient with severe infection". This view forgets that haemostasis is mandatory to survive sepsis, via many pathways, including the newly recognised immunothrombosis and HDPs. In fact, "anticoagulant" treatments disrupt a tight equilibrium between the pathogen and the adaptive host response and may lead to more deaths in one group of patients (adaptive haemostasis) and to fewer deaths in another (noxious haemostasis). Recognition of "noxious haemostasis" remains a major challenge for critical care physicians. Negative therapeutic trials [135,136] and the withdrawal of drotrecogin alfa [137], but also the emerging concept of immunothrombosis [14], could argue for a radical "tabula rasa" regarding coagulation during septic shock. The debate is still open and can be summarised in one question: "Should all patients with sepsis receive anticoagulation?" [138,139]. Finally, whether immunohaemostasis/DIC clinical assessment is reliable remains a major issue (Fig. 2).
A mini-review of current (and past) therapies is provided in supplementary data (Additional file 1, Additional file 4: Table S2, Additional file 5: Table S3 and Additional file 6: Figure S2) regarding: i. limitation of thrombin and fibrin generation, ii. DIC with thrombotic/multiple organ failure pattern, iii. DIC with haemorrhagic pattern.
In the following section, we will present an overview of therapies focused on immunohaemostasis activation.
Inhibition of contact pathway
The contact pathway is not necessary for "normal" haemostasis. FXII(a) and FXI(a) are therefore new targets for developing "safe" antithrombotic drugs without antihaemostatic effects [140][141][142]. Moreover, such drugs could improve hypotension by targeting bradykinin release.
C1-inhibitor
C1-inhibitor regulates both complement activation and FXII, and could improve capillary leakage and hypotension on the one hand and contact phase-induced thrombin generation on the other. Like other serpins, C1-inhibitor is dramatically reduced in septic shock, and C1-inhibitor supplementation improved patient outcomes or renal function in small randomised trials [143][144][145]. Nevertheless, no large randomised trial supports its use. Interestingly, the bradykinin receptor antagonist icatibant had no effect in a porcine model of septic shock [146].
FXII blockade
In a baboon model challenged with a lethal dose of E. coli, the monoclonal antibody C6B7, directed against FXIIa, improved survival with higher blood pressure. In the treated group, the inflammatory response was reduced, with lower IL-6 and neutrophil elastase release as well as less complement activation. Inhibition of FXIIa was obvious, with reduced BK release and fibrinolysis. Nevertheless, both groups experienced DIC with low platelet count, low fibrinogen and low FV [147]. Another FXIIa-blocking monoclonal antibody is 3F7. This antibody appears safe as an anticoagulant in an experimental extracorporeal membrane oxygenation model, with reduced bleeding compared to heparin, but no data are yet available regarding septic shock [148].
FXI blockade
14E11 is an anti-FXI monoclonal antibody that blocks FXI activation by FXIIa but not by FIIa, and displays antithrombotic properties. This molecule was tested in mouse polymicrobial sepsis: inflammation and coagulopathy were improved, as was survival, when 14E11 was given up to 12 h after the onset of bowel perforation. Clotting times were not modified, and no bleeding could be evidenced in this model [149].
Interestingly, FXI-knockout mice (FXI−/−) show an increased inflammatory response with impaired neutrophil functions (but no pulmonary haemorrhage) in a model of Klebsiella pneumoniae and Streptococcus pneumoniae pneumonia, resulting in increased mortality. Inhibition of FXI activation by FXIIa does not reproduce this pattern [150].
A genetically engineered fusion protein (MR1007) containing anti-CD14 antibody (to block LPS receptor) and the modified second domain of bikunin (with anti-FXIa activity) improves survival in a rabbit model of sepsis without increasing spontaneous bleeding [151].
Inhibition of platelet functions in thrombus formation
Platelets are important immune cells, and thrombocytopenia is associated with increased mortality in septic shock [152,153]. Few data support a benefit of previous aspirin treatment in community-onset pneumonia with [154] or without septic shock [155]. In a retrospective study of patients with septic shock, chronic antiplatelet treatment was not associated with reduced mortality [156]. There are no data to support the introduction of antiplatelet therapy, or platelet transfusion in the absence of obvious thrombocytopenia with bleeding.
Inhibition of polyP
Targeting polyP is a new opportunity in the treatment of contact phase-induced thrombosis, including immunothrombosis, but some of them are toxic in vivo and cannot be used in humans (polymyxin B, polyethylenimine and polyamidoamine dendrimers) [157].
Universal heparin reversal agents (UHRAs)
UHRAs have been developed to reverse heparin effects but also display anticoagulant effects of their own. UHRA-9 and UHRA-10 specifically inhibit polyP and show antithrombotic effects without increased bleeding in a mouse model of arterial thrombosis [158]. Nevertheless, these agents have not been used in experimental septic shock to date.

Fig. 2 Natural history of coagulation during infection and potential therapeutics. The first step is "adaptive haemostasis" associated with the systemic inflammatory syndrome. Platelet count increases and fibrinogen production is dramatically increased (red curve). Thrombin generation is initiated with slight shortening of PT and aPTT (dark blue curve), resulting in fibrin monomer generation (green curve). The natural anticoagulants antithrombin and protein C are decreased by consumption and downregulation (light blue curve). Inhibition of fibrinolysis by PAI-1 results in low D-dimers (yellow curve). Only low-dose heparin (unfractionated or low molecular weight) could be recommended to prevent thrombosis (lower part of the graph). Reduction of anticoagulants and continuous thrombin generation result in prolonged clotting times (PT and aPTT) and in platelet and fibrinogen consumption, which remain in the high normal range. Fibrin monomers increase owing to sustained fibrin formation and defective polymerisation by FXIIIa. D-dimers are moderately increased. This step can be called the "thrombotic/multiple organ failure DIC" step and could be treated by natural anticoagulant infusion (antithrombin or soluble thrombomodulin) or fresh-frozen plasma. Later in the natural evolution of coagulation, consumption of all factors and platelets results in very low levels of fibrinogen, AT and PC, prolonged PT and aPTT, and massive fibrinolysis with very high D-dimers. This "fibrinolytic DIC" step is characterised by oozing and massive bleeding, and supportive therapy combines fresh-frozen plasma and platelet transfusions, fibrinogen supply and tranexamic acid to prevent fibrinolysis.
Phosphatases
Platelet-derived polyP are rapidly degraded by phosphatases. During septic shock, alkaline phosphatase activity is dramatically decreased, which could enhance polyP activity. A recombinant human alkaline phosphatase (RecAP) is able to improve renal function in septic shock-induced acute kidney injury [159][160][161]. Moreover, RecAP inhibits platelet activation ex vivo by converting ADP into adenosine and reverses the hyperactivity of septic shock-derived platelets [162]. Effects on polyP were not specifically studied in this experimental work but cannot be excluded.
Dabrafenib
Dabrafenib is a B-Raf kinase inhibitor indicated in unresectable or metastatic melanoma with the BRAF V600E mutation. This molecule has anti-inflammatory effects on polyP-mediated vascular disruption and cytokine production. In a mouse model of CLP-induced septic shock, administration of dabrafenib 12 and 50 h after ligation improved survival [163].
Inhibition of NETs/histones Deoxyribonuclease 1 (DNase 1)
Deoxyribonuclease 1, or dornase alfa (Pulmozyme®), is an inhaled enzyme that degrades extracellular DNA, used in patients with cystic fibrosis. Few experimental data are available regarding NETs. In a mouse model of thrombosis, DNase 1 infusion disassembled NETs and prevented thrombus formation [164]. Interestingly, in a CLP model of sepsis, delayed (but not early) DNase 1 infusion reduced organ failure and improved outcome [165]. More recently, DNase 1 infusion in mice challenged with LPS, E. coli or S. aureus reduced thrombin generation and platelet aggregation and improved microvascular perfusion [166] and survival [167].
Interferon-λ1/IL-29
IFN-λ1/IL-29 is a potent antiviral cytokine able to prevent NET release induced by septic shock sera or by platelet-derived polyP, through phosphorylation of the mammalian target of rapamycin (mTOR) and downregulation of autophagy. Moreover, IFN-λ1/IL-29 does not alter neutrophil viability or ROS production, preserving phagocytosis. IFN-λ1/IL-29 has a strong antithrombotic activity in experimental arterial thrombosis but could also regulate immunohaemostasis [168].
Conclusion: evidence-based versus pragmatic medicine
To date, it is not possible to propose a unique strategy to diagnose and treat coagulation disorders during infection and septic shock. On the one hand, an "old view" considered activation of blood coagulation as one of the principal ways to die, and thrombin as the principal suspect. This view was the rationale for anticoagulation during septic shock, with many experimental data supporting it; nevertheless, all clinical trials, with the exception of the PROWESS trial, failed to improve survival in unselected septic shock patients. On the other hand, recent experimental and clinical data support a beneficial role of blood coagulation in surviving sepsis, including immunohaemostasis. The first step to improve patient care is to stratify the "coagulopathy". A combination of biological tests must be used daily, possibly combined in scores. We believe that the JAAM 2006 and JAAM-DIC scores, which take into account the inflammatory syndrome and its evolution, are the most appropriate. New markers of cell activation may be of interest. The second step is the choice of therapeutic intervention. Treatment of both infection and shock without delay is mandatory; then, anticoagulation may be considered. To date, no recommendation can be made with a high level of proof according to international guidelines. Nevertheless, three different patterns can be recognised (Fig. 2): i. Absence of obvious coagulopathy, with high platelet count, low D-dimers, subnormal PT and AT, requiring only prevention of thrombosis by unfractionated or low molecular weight heparins. ii. Thrombotic/multiple organ failure coagulopathy (also referred to as thrombotic DIC), with a "low normal" platelet count, prolonged PT, decreased AT and mild to moderate D-dimer levels; the clinical presentation may combine organ failure and cutaneous signs such as symmetric limb gangrene with pulses and retiform purpura. Antithrombin and recombinant soluble thrombomodulin must be considered.
New treatments targeting FXIIa, FXIa, polyP and NETs preventing thrombosis are in development and improve survival in experimental sepsis or septic shock. They have not yet been tested in humans. iii. Haemorrhagic/fibrinolytic coagulopathy with very low platelets, fibrinogen and AT, prolonged coagulation times and clinical oozing. Massive transfusion of freshfrozen plasma, platelets and fibrinogen is required, with antifibrinolytic drugs.
New clinical trials are necessary to support this view and to improve patients' care.
Additional files
Additional file 1. Supplementary data.
Additional file 4: Table S2. Efficacy of anticoagulants in septic shock.
Additional file 5: Table S3. Effect of antithrombin in pneumonia-induced septic shock with DIC (observational nationwide study) [40].
Additional file 6: Figure S2. Timing of anticoagulant therapy.
Authors' contributions
XD was the primary author responsible for literature search and review. XD, JH and FM were involved in the generation of the first version of the manuscript and then in critical revision, editing and generation of the revised manuscript. All authors read and approved the final manuscript.
Metallic Nanoparticles Adsorbed at the Pore Surface of Polymers with Various Porous Morphologies: Toward Hybrid Materials Meant for Heterogeneous Supported Catalysis
Hybrid materials consisting of metallic nanoparticles (NPs) adsorbed on porous polymeric supports have been the subject of intense research for many years. Such materials indeed benefit from the intrinsic properties, e.g., high specific surface area, catalytic properties, porous features, etc., of both components. The rational design of such materials is fundamental regarding the functionalization of the support surface and thus the interactions required for the metallic NPs to be strongly immobilized at the pore surface. Herein are presented some significant scientific contributions to this rapidly expanding research field. This contribution will notably focus on various examples of such hybrid systems prepared from porous polymers, whatever the morphology and size of the pores. Such porous polymeric supports can display pores with sizes ranging from a few nanometers to hundreds of microns, while pore morphologies such as spherical or tubular, either open or closed, can be obtained. These systems have allowed some catalytic molecular reactions to be successfully undertaken, such as the reduction of nitroaromatic compounds or dyes, e.g., methylene blue and Eosin Y, boronic acid-based C–C coupling reactions, but also cascade reactions consisting of two catalytic reactions achieved in a row.
Introduction
Metallic nanoparticles (NPs) have been the subject of intense research since their discovery. Over the past decades, this research field has attracted tremendous interest, mainly because of its powerful potential in diverse applications. NPs are described as objects with dimensions in the 1-100 nm range [1]. They can be prepared from inorganic [2,3] or organic matter [4], depending on the targeted applications. Metallic NPs have unique intrinsic properties that make them appealing to scientists. Indeed, they possess a great surface-to-volume ratio that makes them suitable for applications ranging from photonics [5] to heterogeneous catalysis [6][7][8].
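The surface-to-volume argument can be made quantitative: for a spherical particle, S/V = 3/r, so the specific surface area of the metal scales inversely with particle radius. The following back-of-the-envelope sketch is illustrative only; the particle sizes and the use of the bulk Pd density are assumptions, not values from the reviewed papers.

```python
# Back-of-the-envelope estimate of how much metal surface nanoscale
# dispersion buys.  For a sphere: S/V = (4*pi*r^2)/((4/3)*pi*r^3) = 3/r.
# Particle sizes and bulk Pd density (~12023 kg/m^3) are illustrative
# assumptions, not data from the reviewed studies.

def surface_to_volume(radius_m: float) -> float:
    """Surface-to-volume ratio (m^-1) of a sphere of radius radius_m."""
    return 3.0 / radius_m

def specific_surface_area(radius_m: float, density_kg_m3: float) -> float:
    """Specific surface area (m^2 per gram) of monodisperse spheres."""
    # S/m = (S/V) / rho = 3/(rho*r); the final /1000 converts kg to g
    return surface_to_volume(radius_m) / density_kg_m3 / 1000.0

PD_DENSITY = 12023.0  # kg/m^3, bulk palladium

for diameter in (5e-9, 50e-9, 5e-6):  # 5 nm, 50 nm and 5 um particles
    ssa = specific_surface_area(diameter / 2, PD_DENSITY)
    print(f"d = {diameter:.0e} m -> {ssa:8.3f} m^2/g")
```

For 5 nm Pd particles this gives on the order of 100 m 2 ·g −1 of exposed metal surface, which is why nanoscale dispersion of the catalyst matters so much for the applications discussed below.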
A catalyst, as defined by the International Union of Pure and Applied Chemistry (IUPAC), is a chemical substance that increases the rate of a chemical reaction without modifying its overall standard Gibbs energy ∆G. As it is regenerated during the reaction, a catalyst must be removed at the end of the reaction so as to recover the crude product(s) [9]. Two types of catalysts exist depending on whether or not they are in the same phase as the reactants, i.e., homogeneous or heterogeneous catalysts. In the last decade, a non-negligible number of scientific studies in the field have reported on catalyzed molecular reactions involving metallic NPs, either in the gas or the liquid phase. In such a case, a major drawback lies in the separation of the catalyst from the reaction medium, hence the interest in immobilizing metallic NPs on porous polymeric supports, whose surface chemistry depends on the nature of the functional monomer and of the cross-linker. Moreover, the pore size of those materials can be finely tuned by playing with the nature of the porogenic agent [29]. Among other porogenic agents used to generate pores are solid particles [30,31] and linear polymers [32,33]. Alternatively, emulsion templating [34,35], and sometimes a combination of some of the above-mentioned porogens, can be implemented, leading to at least two porosity levels [36][37][38][39]. Starch-based porous materials were prepared by solvent exchange between water and ethanol (Table 1) [40]. Palladium (Pd) NPs were then adsorbed at the pore surface of the resulting supports. Experimentally, the porous polymer, immersed in a solution of palladium acetate, allowed for the reduction of the metal ions, thus leading to immobilization of the corresponding NPs at the pore surface. It is interesting to note that the nanoparticle size distribution was controllable through the selection of the preparation solvent.
Such hybrid materials exhibited a specific surface area of 190 m 2 ·g −1 and an average pore size of 8.2 nm, as characterized by N 2 physisorption through the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods, respectively. Various Pd NP-mediated and microwave-assisted C-C couplings were performed with these hybrid materials, such as the Mizoroki-Heck, Sonogashira, and Suzuki-Miyaura reactions. The authors demonstrated that microwave activation allowed the reaction time to be reduced to as low as 10 min, while data previously reported on such catalyzed reactions, not performed under microwave irradiation, showed reaction times in the 4-12 h range. However, the benefit of such starch-based supports over silica-supported Pd NPs, even if claimed by the authors of the study, is very difficult to assess, as no microwave-assisted catalytic reaction with silica matrices is reported in the literature.
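The BET figures quoted throughout this review (here 190 m 2 ·g −1 ) come from N 2 physisorption isotherms; once the monolayer capacity n m (moles of N 2 adsorbed per gram of material) has been extracted from the BET plot, the conversion to a specific surface area is the standard formula S = n m · N A · σ, with σ ≈ 0.162 nm 2 the cross-sectional area of an adsorbed N 2 molecule. A minimal sketch of that final step follows; the monolayer capacity used is illustrative, chosen only to reproduce the order of magnitude reported above.

```python
# Final step of a BET analysis: convert the monolayer capacity n_m
# (mol of N2 adsorbed per gram of material) into a specific surface area:
#   S_BET = n_m * N_A * sigma
# The monolayer capacity below is illustrative, chosen to reproduce the
# ~190 m^2/g order of magnitude reported for the starch-based support.
N_A = 6.02214076e23      # Avogadro constant, 1/mol
SIGMA_N2 = 0.162e-18     # cross-sectional area of one adsorbed N2, m^2

def bet_surface_area(n_m_mol_per_g: float) -> float:
    """Specific surface area (m^2/g) from the BET monolayer capacity."""
    return n_m_mol_per_g * N_A * SIGMA_N2

print(f"{bet_surface_area(1.95e-3):.0f} m^2/g")  # prints "190 m^2/g"
```

The same one-line conversion underlies all the BET values cited in the following sections.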
Nanoporous polymeric supports were produced by Zhang et al. through a direct Sonogashira coupling between 1,3,5-triethynylbenzene and 1,4-dibromobenzene, as depicted in Figure 1 and Table 1 [41]. The porosity was thoroughly characterized, notably using N 2 sorption. The BET method gave a specific surface area of 421 m 2 ·g −1 and a pore volume of 0.27 mL·g −1 . For this material, calculations achieved by the nonlocal density functional theory (NLDFT), a computational quantum mechanical modelling approach, allowed the authors to highlight the presence of three populations of pores with sizes centered on 0.6, 1.3, and 3.1 nm. Immersion of the as-obtained porous polymers in a palladium diacetate solution in acetone under stirring at 90 °C permitted the formation of the corresponding Pd NPs. The hybrid materials made it possible to achieve different Suzuki-Miyaura C-C couplings with a large variety of halogenoarenes, especially iodo- and bromo-substituted ones, along with phenylboronic acid in high yields (>85%) and rather short reaction times (less than 4 h). The authors highlighted that such hybrid materials allowed for a three-fold decrease in the reaction times when compared to more conventional Pd/C catalysts (from 9 h for Pd/C to 3 h) to perform the conversion of the reactants with similar reaction yields. These hybrid materials were recycled up to five times. No significant reduction of the catalytic activity was observed, while the leaching effect was quantified to be as low as 1%.
Nitrogen-rich porous polymers have been more recently developed by Zhang et al. through a nucleophilic substitution of the chlorine atoms pendant on the cyameluric chloride monomer by the amines of piperazine, as shown in Figure 2 [42,43]. Such heptazine-based porous scaffolds were devoted to CO 2 adsorption but could also find application in heterogeneous supported catalysis (Table 1). To that purpose, further adsorption of Pd NPs was performed by immersion of the resulting heptazine networks in an acetone solution containing palladium acetate under reflux, thus allowing for the self-reduction of the metal cations. Porous features of the scaffolds were determined through BET measurements using nitrogen sorption. The pore size distribution was found to be in the 2-8 nm range. Pore volumes of 0.43 and 0.33 mL·g −1 and surface areas of 106 and 73 m 2 ·g −1 were found by the authors for the materials before and after immobilization of Pd NPs, respectively. These rather low values, when considering hybrid materials in which NPs should develop a high surface area, were unexpected. However, Suzuki-Miyaura C-C couplings were successfully achieved with such supported catalysts in good-to-excellent yields (generally above 80%), except for the 2-bromonaphthalene/arylboronic acid and the bromobenzene/4-nitrobenzene boronic acid pairs, for which yields remained low (below 40%). Again, a large number of bromoarene derivative/phenylboronic acid pairs were assessed by the authors to prove the versatility of these catalytic supports. The authors claimed that the rather low yields observed for the two above-mentioned halogenated compound/boronic acid derivative pairs relied on the large steric hindrance of 2-bromonaphthalene as well as on the poor solubility of 4-nitrobenzene boronic acid. The recyclability of the hybrid material was also assessed by performing five consecutive cycles.
Only a limited decrease in the catalytic activity was observed, and ICP measurements performed before and after the five cycles showed only a negligible leaching of Pd NPs.

An ingenious strategy was developed by Poupart et al. to design and prepare thiol-functionalized polymeric monoliths [44]. In this work, a disulfide-containing dimethacrylate-based crosslinker, i.e., bis(2-methacryloyl)oxyethyl disulfide (DSDMA), was prepared from 2-hydroxyethyl disulfide by esterification with methacryloyl chloride. The resulting crosslinker was copolymerized with ethylene glycol dimethacrylate in the presence of a porogenic solvent and a radical initiator through a photo-triggered process in a UV oven. Upon solvent removal, observation of the porous monolith by scanning electron microscopy (SEM) highlighted the presence of an interconnected globular structure typical of the syneresis phenomenon taking place during copolymerization in the presence of a solvent. Mercury intrusion porosimetry (MIP) confirmed this finding. Interestingly, the average pore size of the materials varied with the nature of the porogenic solvent: the more polar the solvent, the larger the pore size. The presence of disulfide functions within the chemical structure of the monolith allowed for the release of thiol functions through chemical reduction in the presence of D,L-dithiothreitol (DTT). The presence of such thiol functions was highlighted by Raman spectroscopy, with the appearance of a characteristic Raman shift at 2500 cm −1 . Finally, these thiol functions at the pore surface of the monoliths were exploited to immobilize in situ generated gold (Au) NPs through the formation of pseudo-covalent sulfur-gold bonds (Table 1). The catalytic behavior of these hybrid materials was investigated by following, by UV-visible spectroscopy, the reduction of Eosin Y, a pollutant dye used in the textile industry, in the presence of the hybrid material.
It was also observed that the catalytic efficiency remained stable at 60% after ~10 min of reaction, even though a decrease was observed between the first two cycles, likely due to leakage of non-specifically adsorbed Au NPs.
Biporous Bulk Monoliths
Biporous materials have gained particular interest from the research community, likely due to their intrinsic properties, notably in terms of permeability, porosity, and surface area, which have allowed them to be used in diverse areas, including civil engineering, tissue engineering, and drug delivery. Indeed, biporous materials can benefit from each porosity level: (i) the first, macroporous level offers large pores ranging from a few to hundreds of µm, providing enhanced permeability for the liquid to penetrate into the pores, but a poor specific surface area, and (ii) the second level, constituted of pores with dimensions generally below 1 µm, affords a larger specific surface area, but a lower accessibility to the pores. It should be noted that supported catalytic applications require a high permeability, so that the reactants can access the catalytic sites, as well as a large specific surface area, to favor a higher density of metal NPs on the support surface. Thus, gathering these two porosity levels in the same material may provide more efficient catalytic systems. Based on this simple consideration, materials scientists are now able to design and synthesize porous polymers possessing at least two porosity levels in a precise manner, notably by independently controlling each porosity level. The preparation of biporous monoliths generally requires the combination of two porogenic agents, and different methodologies have hitherto been developed to prepare such biporous materials, encompassing gas foaming [68], temperature-induced phase separation (TIPS) [69], 3D printing [70], the double porogen templating approach, the polyHIPE technique [71], but also the electrospinning of (co)polymer (mixture) solutions.
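The permeability/surface-area trade-off between the two porosity levels can be illustrated with the simplest pore model: for a cylindrical pore of radius r, the wall area per unit pore volume is S/V = 2/r, while hydraulic permeability grows roughly as r 2 . The sketch below compares the two levels; the radii are illustrative, not taken from a specific study.

```python
# Simplest illustration of the two-level trade-off: for a cylindrical
# pore of radius r, wall area per unit pore volume is S/V = 2/r, while
# permeability grows roughly as r^2.  The radii below are illustrative.

def area_per_volume(r_m: float) -> float:
    """Pore-wall area per unit pore volume (m^-1), cylindrical pore."""
    return 2.0 / r_m

macro_r, micro_r = 50e-6, 250e-9   # ~100 um and ~0.5 um pore diameters
gain = area_per_volume(micro_r) / area_per_volume(macro_r)
print(f"surface gain from the second porosity level: x{gain:.0f}")
```

Shrinking pores from tens of microns to the sub-micron range thus gains two orders of magnitude in accessible wall area, which is exactly why combining both levels is attractive.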
Electrospinning has been used up until now for the preparation of materials for environmental catalytic applications. Such electrospun materials notably allowed for the reduction of nitro-containing compounds and the treatment of hexavalent chromium (Cr VI ) [72]. Nowadays, the implementation of electrospun (co)polymers as catalytic supports is well documented in the literature, both regarding the use of polymer mats as precursors that are calcined to create inorganic structures and regarding polymer fibers possessing chelating groups, e.g., carboxylic acids or amines, which are reported in many scientific publications in the field. Thus, Huang et al. proposed the electrospinning of a blend of polyethyleneimine (PEI) and poly(vinyl alcohol) (PVA). The resulting mats were used as supports for the adsorption of Au [26] and Pd [72] NPs. The corresponding Au NP- and Pd NP-based hybrid materials were successfully applied for the catalytic reduction of nitroaromatic compounds and of highly carcinogenic Cr VI to Cr III , respectively. In a similar fashion, Xiao et al. electrospun a blend of poly(acrylic acid) (PAA) and PVA. The carboxylic acid groups arising from the PAA chains allowed for the chelation of sodium borohydride-mediated in situ generated Ag NPs that were successfully used for the catalytic reduction of p-nitrophenol [73]. Pandey et al. reported on the electrospinning of poly(ether sulfone) (PES) to prepare polymer fibers. The authors took advantage of the presence of ether sulfone moieties to initiate the growth of poly(glycidyl methacrylate) (PGMA) chains through photolysis under UV irradiation [74]. The PGMA chains present at the pore surface display pendant oxirane groups that were opened in the presence of hydrazine, thus allowing for the direct attachment of the reducing agent.
The final step of the hybrid preparation necessitated immersing the copolymer fibers in a palladium salt (PdCl 2 ) solution, the Pd 2+ cations being reduced on contact with the immobilized hydrazine molecules. The resulting hybrid fibers were implemented for the reduction of toxic hexavalent chromium (Cr VI ) as well as p-nitrophenol, but also for the less common reduction of hexavalent (U VI ) to tetravalent uranium (U IV ).
The double porogen templating strategy, relying on the use of two distinct and independent porogenic agents, has notably been used by Ly et al. [45,46] to prepare different types of biporous polymeric monoliths, as highlighted in Figure 3. In this work, 2-hydroxyethyl methacrylate (HEMA) and ethylene glycol dimethacrylate (EGDMA) were used as the functional monomer and crosslinker, respectively, and were polymerized under UV irradiation at 365 nm in the presence of a free-radical initiator, namely AIBN. A porogenic solvent was used to generate the lower porosity level, while NaCl particles (125-200 µm) served as templates to generate the second porosity level. First, the influence of different experimental parameters on the porosity features of the resulting monoporous materials presenting the lower porosity level was investigated: the nature of the porogenic solvent, its volume ratio (with respect to the total comonomer amount), and the crosslinker-to-functional monomer molar ratio were finely tuned. The more polar the solvent, the larger the pore size. Similarly, high porogenic solvent volume ratios led to larger pores and vice versa. Alternatively, poly(methyl methacrylate) beads were used to generate the macroporosity level. Upon photo-triggered copolymerization and subsequent removal of the porogenic solvent and the macroparticles, the available hydroxyl functions of the as-obtained materials were first activated with carbonyldiimidazole (CDI) and then functionalized with different amines, e.g., cysteamine, ethylenediamine, allylamine, and propargylamine. The functionalization with allylamine and propargylamine notably allowed for further modification of the pore surface through UV-mediated thiol-ene and thiol-yne "click" chemistry, respectively, using cysteamine or thioglycolic acid as model thiols. Raman spectroscopy was used as the technique of choice to follow the functionalization steps.
Au NP adsorption through the in situ strategy consisted of the impregnation of the materials with an aqueous tetrachloroauric acid (HAuCl 4 ) solution, followed by the subsequent hydride-mediated reduction of Au 3+ ions in the presence of NaBH 4 (Table 1). Thiol-functionalized porous HEMA-based materials led to the adsorption of Au NPs with greater sizes but, more surprisingly, also to the highest particle leaching. Conversely, amine functions at the pore surface of the materials led to the formation of smaller ( Figure 4A) and better dispersed metallic NPs. This notably allowed an easier reuse of the hybrid materials for the conversion of p-nitrophenol into the corresponding amine in the presence of a hydride source, i.e., NaBH 4 . The catalytic reaction was very fast and easy to follow, as the solution was initially yellow due to the π→π* electron transition of the p-nitrophenolate ion, and it became colorless after reduction of the nitro moiety. Further, the efficiency of the biporous materials, which also have a higher specific surface area than their monoporous counterparts, was assessed. Interestingly, biporous systems showed a significantly higher catalytic efficiency than their monoporous counterparts containing either the upper or the lower porosity level. The authors claimed that this is likely due to a higher specific surface area of the doubly porous monoliths when compared to their monoporous analogues displaying the upper porosity. They also pointed out the higher accessibility of the catalyst in the doubly porous materials when compared to monoliths with only the lower porosity level. The authors finally demonstrated that this type of hybrid catalyst can also be used for the reduction of Eosin Y, paving the way toward the use of such materials for industrial wastewater depollution.
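Reductions of p-nitrophenol or Eosin Y followed by UV-visible spectroscopy, as in the studies above, are conventionally quantified through an apparent pseudo-first-order rate constant: with NaBH 4 in large excess, A t = A 0 exp(−k app t), so a linear fit of ln(A t /A 0 ) versus time yields k app . The sketch below performs that analysis on synthetic absorbance data; the decay constant and time points are illustrative, not data from the cited work.

```python
# Pseudo-first-order analysis of a catalytic reduction followed by
# UV-visible spectroscopy: with NaBH4 in large excess,
#   A_t = A_0 * exp(-k_app * t),
# so the slope of ln(A_t/A_0) vs t gives -k_app.  The decay constant
# and time points below are synthetic, not data from the cited studies.
import math

def fit_kapp(times_s, absorbances):
    """Least-squares slope of ln(A_t/A_0) vs t, returned as k_app (s^-1)."""
    a0 = absorbances[0]
    ys = [math.log(a / a0) for a in absorbances]
    n = len(times_s)
    tbar = sum(times_s) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times_s, ys))
    den = sum((t - tbar) ** 2 for t in times_s)
    return -num / den

ts = [0, 60, 120, 180, 240, 300]                  # sampling times, s
As = [1.0 * math.exp(-5e-3 * t) for t in ts]      # synthetic absorbances
print(f"k_app = {fit_kapp(ts, As):.4f} s^-1")     # recovers 0.0050 s^-1
```

Comparing k app values obtained this way, normalized by catalyst loading, is the usual basis for the efficiency comparisons between monoporous and biporous supports discussed above.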
Doubly porous materials have also been prepared through the emulsion templating approach. High internal phase emulsions have been used for a long time to prepare such biporous polymeric materials, in which the higher porosity level arises from the droplets of the dispersed phase and the lower porosity level from the interconnections between adjacent droplets [71]. Deleuze et al.
pioneered the use of functional polyHIPEs as candidates of choice for supporting metal nanoparticle-based catalysts. In 2005, they reported on the design and synthesis of cross-linked poly(styrene-co-vinylbenzyl chloride) (P(S-co-VBC))-based [28,47] polyHIPEs as supports for in situ generated Pd NPs. Nitrogen sorption porosimetry (BET method) demonstrated that both PS and PVBC polymeric supports show a specific surface area of ca. 900 m 2 ·g −1 . At the same time, the presence of a porogenic solvent added to the HIPE polymerization feed induced a pore size distribution in the 10-80 nm range. The resulting hybrid supports, obtained by reduction of the precursor palladium salt, were used for the hydrogenation of an alkene, namely allyl alcohol [47], and for Suzuki-Miyaura cross-coupling [28] reactions (Table 1). Reaction times of 1 h and 70 h were reported for near-completion hydrogenation and coupling reactions, respectively. The authors demonstrated that PS-based catalytic supports offered good activity, even compared to commercial Pd/C, and also a satisfying reusability regarding alkene hydrogenation. Regarding the catalysis of the Suzuki-Miyaura coupling reaction, the catalytic activities of these PVBC-based hybrid materials were found to be close to those obtained with their homogeneous counterpart. One system even showed a better activity than the well-known Pd/C powder. Finally, Suzuki-Miyaura carbon-carbon coupling reactions using these hybrid supports were successfully achieved with a wide range of substrates, demonstrating the versatility of the as-prepared materials. Some of these PS-based polyHIPEs were also implemented by the same group as supports for Au NPs [49]. To this purpose, PS-based materials were simply immersed in a HAuCl 4 solution and Au 3+ cations were self-reduced through PS induction.
Pore sizes in the 200-291 µm range were obtained for such materials depending on the samples, while a porosity ratio of 82% was found by mercury intrusion porosimetry. As seen in Table 1, supported Au NPs allowed for the successful and recyclable reduction of a dye, Eosin Y, under mild conditions (25 °C) within short reaction times (1 h). Recently, it was demonstrated that the synthesis of high specific surface area biporous polymers can be achieved from a reversed oil-in-water high internal phase emulsion. To this purpose, HEMA and N,N'-methylenebisacrylamide (MBA) dissolved in water were emulsified with cyclohexane in the presence of Pluronic ® F68 as a surfactant to stabilize the concentrated emulsion [48]. The polymerization of the continuous aqueous phase was triggered by addition of ammonium persulfate (APS) and N,N,N',N'-tetramethylethylenediamine (TEMED). Upon removal of the oil phase, biporous polymers presenting large pores arising from the oil droplets and smaller pores (voids) originating from the interconnections between adjacent oil droplets were obtained. A hyper-crosslinking procedure was investigated to increase the specific surface area of the polymers but also to functionalize the pore surface. Experimentally, after a two-step modification involving carbonyl diimidazole (CDI) activation followed by allylamine or propargylamine functionalization, di- and tetra-thiols were tethered to the pore surface of the biporous polymers, allowing for the hyper-crosslinking of the materials and leaving the surface covered with thioether moieties as well as free thiols. Specific surface areas of up to 1500 m 2 ·g −1 , as determined by the BET method, could be obtained. The remaining free thiols were used to generate in situ Au NPs through impregnation with gold salts and subsequent NaBH 4 -mediated reduction ( Figure 4B).
Such materials were used to successfully catalyze the reduction of pollutant compounds, such as 4-nitrophenol but also Eosin Y, a dye used in the textile industry (Table 1).
Finely Divided Bulk Materials as Supports for NPs
Crosslinked polymeric materials are indeed successful candidates for supporting metallic NPs. However, some polymer powders can also be used. Linear polymers, if precipitated, give such functional materials. Polystyrene, which is easily functionalizable through Friedel-Crafts reactions, is naturally of interest [75,76].
Amari et al. have described a methodology to modify linear polystyrenes (PS) and precipitate them into powders so as to support various metallic NPs. Experimentally, linear PS were submitted to nitration and subsequent reduction of the intermediate nitro compound to afford amino-functionalized polystyrenes. Amino groups were then modified using either a chlorine-bearing triazine [50] or methyl acrylate [51]. Regarding the triazine modification, 2-aminothiazole was further added to the modified PS so as to chelate gold ions, which were then reduced into the corresponding NPs with a chemical reducing agent. The resulting PS-supported Au NPs were used to completely reduce nitroaromatic compounds, i.e., 4-nitrophenol (93-96% reduced in 9 min for up to 5 cycles) as well as trifluralin (95% reduced in 15 min), an herbicide (Table 1). On the other hand, the acrylate-modified poly(aminostyrene) was submitted to an amidation reaction in the presence of ethylenediamine. The resulting polymer was used as a convenient support for the immobilization of silver NPs generated in situ through reduction of silver nitrate ( Figure 5A). Implementation of the resulting polymer-adsorbed Ag NPs for the reduction of a pollutant dye, i.e., methylene blue, was assessed (Table 1). Such a supported catalytic reaction, monitored by UV spectroscopy, was repeated over five consecutive cycles, with only a slight decrease in the reduction yield from 97% to 91%. One should note the tendency of some metallic nanoparticles, such as Ag and Cu NPs, to oxidize in air. This could lead to some loss of catalytic activity of the resulting hybrid materials, even though it was not clearly mentioned in these investigations.
Recent studies from Yahya et al.
reported on the implementation of a green and sustainable polymeric support prepared by refining rice straw to an ionic nanocellulose Schiff base (NCESB). To this purpose, refining of the straw was processed through successive dewaxing, swelling, de-pulping, bleaching, and acidic hydrolysis steps, as shown in Figure 6 [52]. The as-obtained nanocellulose (NCE) was then submitted to carbamoylmethylation to afford the corresponding carbamate-functionalized NCE, which was modified with a vanillin derivative presenting an imidazolium ionic liquid motif to give the corresponding Schiff base-containing nanocellulose. In a final synthetic step, the hybrid material was prepared by immersion of the NCESB in a palladium acetate salt solution, which allowed for the bio-reduction of the latter. TEM was used to verify the presence of well-dispersed Pd NPs presenting a rather narrow size distribution (5-23 nm) within the porosity of the polymer support. This green hybrid NCESB-Pd nanocatalyst was assessed in the Suzuki reaction using a wide range of halobenzenes and phenylboronic acid. This new catalyst exhibited remarkable activity in such coupling reactions at 50 °C in short reaction times (15-60 min) and even at room temperature for longer reaction times of ca. 120 min. Finally, the recyclability of the catalyst was confirmed by monitoring the activity of the NCESB-Pd nanocatalyst in the Suzuki coupling reaction over five consecutive runs (Table 1). No significant loss of catalytic activity was observed after five cycles, as an 88% yield of the desired product was still obtained. Different characterization techniques were used to demonstrate that the nanocatalyst was not altered in its structural and morphological nature (FTIR, XRD), while ICP-AES showed that no significant leaching (<1%) of Pd NPs occurred after these five runs, thus demonstrating the robust anchoring of the Pd NPs at the pore surface of the NCESB through carboxy, azomethine, hydroxy, and methoxy groups.
The authors claimed that this new ionic nanocatalyst may pave the way for a novel generation of low-cost, green, and highly effective ionic nanocatalysts for organic transformation reactions.
Hybrid catalysts can also be prepared from nitrogen-based ligands [53]. Targhan et al. proposed the esterification of itaconic acid with a hydroxyl-functionalized terpyridine ligand, namely 4′-(4-hydroxyphenyl)-2,2′:6′,2″-terpyridine (HPTPy), to prepare the corresponding functional monomer. The resulting bis-terpyridine-functionalized ligand was then copolymerized with trimethylolpropane triacrylate (TMPTA) in the presence of MeOH/CH3CN (40:60) as the porogenic solvent mixture to afford the corresponding terpyridine-functionalized cross-linked porous polymer. The porous morphology of the copolymer was investigated by N2 sorption porosimetry through the BET method. A mean pore diameter of 5 nm and a surface area of 21 m²·g⁻¹ were observed for this material. Upon refluxing a PdCl2 solution in EtOH with the porous polymer for 24 h, the resulting hybrid material could be obtained after purification by successive washings with EtOH. The Pd NPs were characterized by XRD and TEM, notably demonstrating a disperse coverage of the material pore surface. The hybrid material was applied as a highly effective recyclable catalyst in Suzuki-Miyaura and Mizoroki-Heck coupling reactions (Table 1), allowing for high-yield conversions (92 to 98%) with a large diversity of starting reagents. The reactions were thus investigated under low Pd-loading conditions and using straightforward methods. The corresponding products were obtained with excellent yields (up to 98%) and high catalytic activities (TOF up to 213 h⁻¹). The authors demonstrated that it is possible to separate the supported catalyst from the reaction mixture by simple centrifugation and that it could be reused for six consecutive runs with only a slight reduction in catalytic activity.
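The turnover frequency (TOF) quoted above normalizes the amount of product by the amount of catalytic metal and the reaction time. A minimal sketch of the calculation, using purely illustrative numbers rather than data from the cited study:

```python
def turnover_frequency(mol_product, mol_metal, time_h):
    """TOF = n(product) / (n(catalytic metal) * t),
    in moles of product per mole of metal per hour."""
    return mol_product / (mol_metal * time_h)

# Illustrative example (hypothetical values): 1 mmol of coupling
# product formed in 0.5 h over 10 umol of supported Pd
tof = turnover_frequency(1e-3, 10e-6, 0.5)
print(f"TOF = {tof:.0f} h^-1")  # -> TOF = 200 h^-1
```

Note that TOF values computed this way count all metal atoms, not only the surface ones, and are therefore conservative for supported nanoparticles.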
Figure 6. Synthetic procedure adopted for the preparation of supported Pd nanocatalysts from rice straw [52].
Nanoporous Polymer Films as Supports for NPs
Oriented nanoporous polymer-based thin films can also be used as interesting supports for the adsorption of metallic NPs. The pore sizes of nanoporous materials are appealing for catalysis applications, as they can provide a filtration phenomenon occurring simultaneously with the catalytic activity. Such materials can be obtained from well-defined di- but also triblock copolymers containing immiscible homopolymer segments. Indeed, depending on three different parameters, i.e., the Flory-Huggins interaction parameter between both blocks (χAB), the number of repeat units N, and the volume fraction of the minority block f, such AB diblock copolymers can adopt different morphologies after macroscopic orientation, including body-centered spheres, hexagonal cylinders, bicontinuous gyroids, and alternating lamellae. For instance, nanoporous polystyrene could be obtained from polystyrene-block-poly(D,L-lactic acid) (PS-b-PLA) [77,78], polystyrene-block-poly(ethylene oxide) (PS-b-PEO) [79], or polystyrene-block-poly(methyl methacrylate) (PS-b-PMMA) [80]. Removing the sacrificial block could be achieved by selective chemical degradation of the minority block, but this generally required harsh experimental conditions, e.g., alkaline or acidic media, strong UV irradiation, etc. Another, somehow smarter alternative path for removing the sacrificial block lies in the selective cleavage of a chemical moiety conveniently positioned at the junction between both blocks, as depicted in Figure 7. This notably allows for using milder and more environmentally friendly experimental conditions, but also for positioning chemical functionalities of interest at the pore surface so as to envision further post-polymerization modification. Thus, different studies based on this synthetic strategy have emerged in the literature.
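The dependence of the melt morphology on χN and f described above can be caricatured with a toy classifier. The boundaries below are illustrative values loosely inspired by mean-field diblock phase diagrams (the true boundaries are curved and χN-dependent), so this is a rough sketch rather than a predictive tool:

```python
def diblock_morphology(chi_n, f):
    """Rough morphology guess for an AB diblock copolymer melt from
    chi_AB*N and the volume fraction f of one block. The boundary
    values are illustrative approximations only."""
    f_min = min(f, 1.0 - f)          # diagram is symmetric about f = 0.5
    if chi_n < 10.5:                 # approximate order-disorder transition
        return "disordered melt"
    if f_min < 0.17:
        return "body-centered spheres"
    if f_min < 0.30:
        return "hexagonal cylinders"
    if f_min < 0.35:
        return "bicontinuous gyroid"
    return "alternating lamellae"

# e.g., a strongly segregated copolymer with ~25 vol% minority block
print(diblock_morphology(chi_n=40.0, f=0.25))  # -> hexagonal cylinders
```

For the nanoporous films discussed here, the hexagonal cylinder window is the relevant one, since etching the cylindrical minority block leaves oriented cylindrical pores.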
Some of them notably mentioned the possibility of using photocleavable anthracene dimer-, acetal- [81], boronate- [82], disulfide-, or ortho-nitro ester-based chemical junctions between the remaining and the sacrificial block [83]. This was reported by Ryu et al., who prepared oriented thiol-functionalized nanoporous thin films from a disulfide-containing diblock copolymer so as to generate gold cylinders after Au NP adsorption at the pore surface [84]. Unfortunately, no application was described in this work.
Only a few of these systems have been reported in the literature describing the use of nanoporous polymers arising from diblock copolymers for heterogeneous supported catalysis. Our group has reported on the synthesis of diblock PS-b-PLA copolymers from a heterobifunctional initiator containing a disulfide bridge, through the controlled radical polymerization of styrene via atom transfer radical polymerization (ATRP) and subsequent ring-opening polymerization (ROP) of D,L-lactide [54]. After the successful generation of both PS and PLA blocks, the resulting diblock copolymers were submitted to mechanical orientation through channel die processing. The nanoporosity was generated via the selective cleavage of the disulfide bridge present between both blocks in the presence of triphenylphosphine (TPP), a chemical reducing agent. It was then envisioned to decorate such pores with Au NPs and to use the resulting composite materials for supported catalysis. Thus, advantage was taken of the presence of thiol functions at the pore surface to immobilize Au NPs through reduction of impregnated HAuCl4 salts. It is worth mentioning that the presence of thiol functions also allowed for grafting allyl-functionalized poly(ethylene oxide) (PEO) oligomers through a thermally initiated thiol-ene reaction so as to change the chemical nature of the pore surface, demonstrating that the wettability of the pore surface (hydrophobicity vs. hydrophilicity) could be easily tuned using such smart chemistry. Further, the Au NPs adsorbed at the pore surface of such nanoporous polystyrenes were used as supported catalysts for the reduction of 4-nitrophenol (Table 1) and showed a reduction yield of 68% after 1 h of reaction. The supported heterogeneous catalysts were recycled five times in a row and did not show any loss of efficiency (reduction rates between 64 and 71%).
Acetal junctions were also envisioned by our group to prepare PS-b-PLA diblock copolymers [55]. To that end, p-hydroxybenzaldehyde was first reacted with α-bromoisobutyryl bromide to give the corresponding ester intermediate, which was subsequently acetalized with glycerol under acidic catalysis to afford an acetal-containing heterobifunctional initiator. PS-b-PLA diblock copolymers were prepared from this initiator through ATRP of styrene and successive ROP of D,L-lactide. A solution of this copolymer in THF was then spin-coated onto a silicon wafer, and the orientation of the block copolymer structure was performed via solvent vapor annealing of the resulting film. Upon removal of the sacrificial block by selective cleavage of the acetal junction under acidic conditions (trifluoroacetic acid in ethanol), an aldehyde-functionalized porous polystyrene was obtained. The presence of such chemical moieties at the pore surface notably allowed for successful functionalization through reductive amination with amines, e.g., tetraethylenepentamine (TEPA).
Further, the presence of amine functions at the pore surface made it possible to adsorb in situ generated Au NPs by hydride-mediated reduction of Au3+ cations retained at the pore surface by such chemical groups, as shown in Figure 5B. Such hybrid systems also enabled the conversion of phenylboronic acid into biphenyl through a C-C cross-coupling reaction (Table 1). They were also implemented for the hydride-mediated reduction of p-nitrophenol into the corresponding amine (Table 1). More interestingly, one-pot cascade reactions consisting of the two previous successive reactions were successfully achieved with those hybrid materials. Experimentally, m-nitrophenylboronic acid was first submitted to C-C cross-coupling in the presence of the supported catalyst, and the resulting 3,3′-dinitrobiphenyl was reduced when NaBH4 was added to the reaction mixture as the hydride source, affording 3,3′-diaminobiphenyl. This example clearly paved the way towards the development of efficient hybrid systems for catalytic cascade reactions. Recently, another strategy based on a diblock copolymer macromolecular architecture relying on a boronate ester junction [82] was developed by Bakangura et al. The convergent synthesis of such macromolecules relied on a final coupling of boronic acid- and nitrocatechol-appended polystyrene and poly(ethylene oxide) homopolymers. After spin coating of these diblock copolymers, solvent vapor casting, and etching of the sacrificial PEO block in EtOH supplemented with TFA, nanoporous polymers presenting well-ordered, close-packed cylindrical nanopores oriented perpendicularly to the silicon wafer supports and bearing either catechol or boronic acid functionalities were obtained. Such nanoporous polystyrenes could be implemented for the capture of carbohydrates/diols or for the supported catalysis of molecular reactions.
In-Capillary Hybrid Systems
Flow-through chemistry has gained tremendous interest in the last decade, as it offers some undeniable advantages over more classical solution chemistry. Notably, it allows chemical reactions to be processed with a catalyst immobilized at the pore surface of (polymeric) materials prepared within flow-through systems. Such flow chemistry was pioneered in the second half of the 2000s [85], but a plethora of reports are now retrievable in the recent literature [86,87].
N-acryloxysuccinimide (NAS)-based monoliths were investigated in depth in-capillary as porous microsystems of choice for the adsorption of Au NPs and the further supported heterogeneous catalysis of flow-through molecular reactions [56]. The preparation of such monoliths relied again on the use of a porogenic solvent, i.e., toluene in this particular case, that is removed after polymerization, allowing for the generation of the pores. The available NHS-activated ester functional group is well-known to react with nucleophiles, such as amines. In this work, the activated esters were reacted with propargylamine, as shown in Figure 8. A second, UV-triggered thiol-yne radical addition with cysteamine allowed for decoration of the pore surface with amine functions. Such a functionalization step could be easily monitored by Raman spectroscopy, which allowed for highlighting the disappearance of different characteristic signals of the NHS-activated ester group, such as those from imide symmetric and asymmetric stretching at 1785 and 1730 cm⁻¹, respectively, and from activated ester stretching at 1812 cm⁻¹. In parallel, the characteristic signal from the terminal alkyne was observed at about 2100 cm⁻¹. The second step notably permitted spatial control of the grafting, as only the irradiated monolith was functionalized. In this way, micropatterning was possible by using photomasks. The amine functions were prone to adsorb Au NPs: a suspension of commercially available citrate-stabilized Au NPs was percolated through the NAS-based amine-functionalized monolith. This example, which did not demonstrate flow-through catalysis, emphasized the photo-triggered strategy implemented in this study.
Since then, investigations have been expanded to this kind of in-capillary monolith, especially for the adsorption of Au NPs. In 2015, Khalil et al. developed the use of the same NAS-based monoliths as an initial platform for the grafting of ethylenediamine at the pore surface, monitored by Raman spectroscopy [57]. Two distinct approaches were investigated to immobilize Au NPs. Either Au NPs were directly generated at the pore surface of the monoliths by a two-step strategy involving, firstly, the percolation of a tetrachloroauric salt solution (HAuCl4) and, secondly, the reduction of the pore surface-adsorbed Au3+ cations by a chemical reducing agent, e.g., an aqueous NaBH4 solution; or colloidal gold was percolated by pumping a commercially available Au NP suspension (20 nm in diameter) through the monolith. In the latter case, the Au NPs, coated with sodium citrate, can be adsorbed in a robust fashion at the pore surface, as the carboxylate functions of the citrate coating interact strongly with the amines at the pore surface through electrostatic interactions. Different nanogold-catalyzed reduction reactions involving nitroarenes have been investigated, notably with o-, m-, and p-nitrophenol as model molecules.
To that end, various experimental parameters were investigated by the authors, such as the column length, the concentrations of the reagents, and the flow rate of the reagent solution, so as to identify the critical parameters for achieving full conversion of the nitro group into the respective amine. It was notably demonstrated that in situ generated Au NPs offered a higher conversion rate than their commercially available counterparts under the same reaction conditions (flow rate, monolithic capillary length, and concentrations).
More recently, Poupart et al. also investigated the influence of the organic ligand grafted at the surface of the in-capillary monolith, notably on the morphology of the immobilized NPs and their dispersion at the monolith pore surface [57]. To that purpose, ethylenediamine-derived ligands were grafted at the surface of NAS-based, highly permeable in-capillary monoliths. After immobilization of in situ generated Au NPs, the as-obtained hybrids, essentially differing in the grafted amine-containing ligands, were compared. Scanning electron microscopy highlighted the crucial role of the grafted amine ligand regarding the morphology, size, and surface coverage of the immobilized Au NPs at the monolith pore surface. Further, such hybrid microsystems were successfully implemented as flow-through microreactors for the catalytic reduction of nitroaromatic compounds into the corresponding amines. It was demonstrated that monolith-adsorbed Au NPs exhibited good catalytic activities in a flow-through process. This study clearly demonstrated the key role of the nature, primary vs. secondary, of the chelating amine in the morphology (shape, size, dispersion) of the supported Au NPs.
Liu et al. also extended the scope of this strategy based on in-capillary monoliths [58]. NAS was again used as the functional monomer for their preparation. In this study, histamine, i.e., a natural compound containing both an imidazole ring and a terminal aliphatic primary amine in its structure, was grafted at the pore surface of such monoliths. The presence of the imidazole rings at the pore surface, and also their protonation under acidic conditions at pH 1, 3, or 6.5, allowed for the robust and efficient immobilization of commercially available 5-, 20-, or 100-nm Au NPs under flow-through conditions. As theoretically expected, the lower the pH of the protonation solution, the higher the gold content immobilized at the pore surface. Indeed, protonation of the imidazole rings afforded ammonium cations able to create electrostatic interactions with the carboxylate groups of the citrate molecules coating the Au NPs. The size of the particles was also demonstrated to be a parameter to control so as to improve both the permeability of the hybrid monolith and, more importantly, the catalytic efficiency. All prepared samples presented a homogeneous coverage of the Au NPs at the pore surface of the histamine-grafted NAS-based monolith (see, for example, Figure 9A for 20-nm-sized Au NPs). However, the 100-nm-sized Au NPs tended to aggregate and clog the interconnected porosity of the monolith, rendering the resulting microsystem inappropriate for flow-through catalytic applications. The catalytic efficiency of the as-obtained monolith-immobilized 5- or 20-nm-sized Au NPs was investigated with nitroarenes (Table 1), e.g., 4-nitrophenol, 2,5-dinitrophenol, 2,4-dinitroaniline, 2,6-dinitroaniline, and 3,5-dinitroaniline. The 20-nm-based systems showed the best catalytic efficiency. However, these results have to be taken with care, as the gold content involved was not the same when comparing the 5- and 20-nm-sized NPs.
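Part of the size effect discussed above is simple geometry: for monodisperse spheres, the surface area available per gram of metal scales as 6/(ρ·d). A quick sketch, assuming ideal, smooth, fully accessible gold spheres:

```python
RHO_AU = 19.3e6  # bulk density of gold, g/m^3

def specific_surface_m2_per_g(d_nm, rho=RHO_AU):
    """Geometric surface area per gram of monodisperse spheres of
    diameter d_nm: S = 6 / (rho * d). An upper-bound idealization,
    since supported NPs are partly masked by the pore wall."""
    return 6.0 / (rho * d_nm * 1e-9)

for d in (5, 20, 100):
    print(f"{d:>3} nm Au NPs: {specific_surface_m2_per_g(d):5.1f} m2/g")
```

The twenty-fold gap between 5- and 100-nm particles (roughly 62 vs. 3 m²/g) illustrates why, at equal gold mass, smaller particles expose far more catalytic surface, before even considering the clogging issue of the larger ones.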
Poupart et al. demonstrated that Cu NPs could also be used as metallic NPs for nitroaromatic compound reduction (Table 1) [59]. Once again, NAS was used as the functional monomer and was polymerized in fused silica capillaries whose inner surface was preliminarily activated with 3-methacryloxypropyltrimethoxysilane (γ-MAPS). A functionalization step consisting of an amide coupling via dynamic loading of an allylamine solution through the capillary was necessary to decorate the pore surface with alkene moieties. The presence of these unsaturations allowed for the grafting of thiol-containing compounds, e.g., mercaptosuccinic acid, through a UV-triggered radical thiol-ene addition. The success of this coupling reaction was monitored by Raman spectroscopy. Finally, the carboxylic acid functions arising from mercaptosuccinic acid favored the chelation of Cu2+ cations at the monolith pore surface, and their reduction by NaBH4 produced supported Cu NPs. On the other hand, commercial suspensions of 40-60 nm-diameter Cu NPs were also percolated through the as-prepared microcolumns for the sake of comparison, as shown in Figure 9B. Reduction of o-nitrophenol was performed on such hybrid monolithic capillaries to assess their catalytic behavior. A reduction yield of 68.5% was obtained for the preformed NPs using a flow rate of 0.3 µL·min⁻¹, while lower yields were obtained using the in situ generated NPs (40 and 55% for flow rates of 4 and 1.5 µL·min⁻¹, respectively).
Glycerol carbonate methacrylate (GCMA), i.e., a compound derived from a biobased molecule, namely glycerol, has been used as a functional monomer for the preparation of in-capillary monoliths through free-radical copolymerization in the presence of an EGDMA crosslinker and AIBN under UV-triggered radical initiation [60]. This monomer presents a carbonate-containing ring that can then be functionalized with nucleophiles, e.g., amines, through nucleophilic attack. After optimization of the reaction conditions, it was demonstrated that these monoliths exhibit good permeability when toluene and dodecanol are used as the porogenic mixture in a 40/60 v/v ratio, giving rise to pores in the 2.2 µm range but also pores with a size of around 50 nm. In situ Raman spectroscopy was used to monitor the surface modification of the monolith after percolation of allylamine. The disappearance of the carbonate ring was notably observed, together with the appearance of double bonds at the pore surface of this in-capillary monolith. This notably allowed envisioning the implementation of such microsystems either for the immobilization of NPs for heterogeneous supported catalysis or of selectors for chromatography purposes. In this context, mercaptobutyric acid was grafted through UV-triggered thiol-ene addition; a PtCl4 salt aqueous solution was then percolated through the monolithic capillary before its NaBH4-mediated reduction. In this way, hybrid materials were obtained within capillary microsystems and observed through SEM, as highlighted in Figure 9C. Such functional microsystems were successfully implemented for the NaBH4-mediated reduction of p-nitrophenol (Table 1) under a flow rate of 2 µL·min⁻¹.
It is worth noting that such allylamine-functionalized in-capillary monoliths could also be modified with 1-octanethiol through thiol-ene addition and then implemented for the separation of a series of alkylbenzenes (namely toluene, n-propylbenzene, n-pentylbenzene, and 1-phenylhexane) at a flow rate of 0.4 µL·min⁻¹.
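The permeability repeatedly mentioned for such monolithic columns is typically quantified through Darcy's law, K = F·η·L/(A·ΔP). A minimal sketch with purely illustrative numbers (not taken from the cited works):

```python
import math

def darcy_permeability_m2(flow_uL_min, eta_Pa_s, length_m, id_m, dP_Pa):
    """Darcy permeability of a monolithic column:
    K = F * eta * L / (A * dP), with F the volumetric flow rate,
    eta the eluent viscosity, L the column length, A its
    cross-section, and dP the pressure drop."""
    F = flow_uL_min * 1e-9 / 60.0          # uL/min -> m^3/s
    A = math.pi * (id_m / 2.0) ** 2        # capillary cross-section, m^2
    return F * eta_Pa_s * length_m / (A * dP_Pa)

# Hypothetical case: 2 uL/min of water (eta ~ 1 mPa.s) through a
# 10 cm long, 100 um i.d. monolith at a 20 bar pressure drop
K = darcy_permeability_m2(2.0, 1.0e-3, 0.10, 100e-6, 20e5)
print(f"K = {K:.2e} m^2")  # on the order of 1e-13 m^2
```

Measuring the back-pressure at a few flow rates and applying this relation is a common, quick way to compare the flow resistance of differently prepared monoliths.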
Hybrid Materials from Membranes
Functional polymer-based membranes can also be interesting candidates for immobilizing metallic NPs as supported catalysts. As the liquid is forced to percolate through the porosity of the membranes, as in in-capillary microcolumns, the catalytic reaction is forced to occur. A literature survey on the subject suggested that Pd NPs are of great interest for such preformed or home-made systems [27,63,65]. The design of diverse polymer-based membranes exhibiting nanoporosity, and their application in heterogeneous supported catalysis, has also been reported in the literature.
Commercially available polyethersulfone (PES) membranes with pores in the 200 nm range were notably modified by Remigny and Lahitte so as to anchor Pd NPs, the resulting hybrid membranes being implemented in several catalytic reactions, including nitrophenol reduction [61,62], hydrogenation of trans-4-phenyl-3-buten-2-one [86], and Suzuki-Miyaura cross-coupling reactions [27,62,65] (Table 1). The authors notably compared these membranes used either in batch mode or under flow-through conditions. Flow chemistry with such membranes allowed for much faster reactions: while the reactions could be performed within a 10 s range under flow conditions, the batch mode required 6 h for full conversion. Another interesting result was that no byproducts were observed in the flow-through mode.
Other research groups mentioned the use of membranes presenting embedded NPs. Mora-Tamez et al. [66] reported on the implementation of Au NP-immobilized cellulose triacetate-based membranes. To this purpose, membranes composed of cellulose triacetate were prepared in the presence of plasticizers (i.e., 2-nitrophenyl octyl ether and Adogen® 364) and semi-interpenetrated with an inorganic phase prepared by a sol-gel process in the presence of poly(dimethylsiloxane) and tetraethoxysilane. The authors claimed that the originality of the synthetic strategy arose from the extraction of Au3+ ions by the membranes through percolation for 5 h and their concomitant in situ reduction with a 0.01 M sodium citrate solution (Turkevich method). The resulting hybrid membranes were characterized using SEM imaging, which demonstrated the presence of Au NPs within the membranes (Figure 9D), but also by nitrogen adsorption/desorption isotherms (BET method). Specific surface area values ranging from 67 to 137 m²·g⁻¹ and pore volume values from 0.048 to 0.097 mL·g⁻¹ were obtained. Other characterizations, such as transport properties, cyclic voltammetry, and XPS, were performed to determine the gold quantity within the materials. The reduction of p-nitrophenol was successfully achieved through such membranes. Two types of hybrid membranes were compared in this study, one prepared from polymeric inclusion membranes (PIMs), also called AuNPs-PIM, and the other one obtained from starting polymeric nanoporous membranes (PNMs), also called AuNPs-PNM10%. The first one contains no additional porogen, while the second one contains 10 wt % dimethyl phthalate as a nanoporogenic agent. The obtained results suggested a better catalytic efficiency of the AuNPs-PIM membranes, with ~95% reduction in 25 min. On the other hand, the AuNPs-PNM10% membranes reduced only ~87% of the reactant after 120 min.
Pd(0) NPs supported on a cellulose acetate membrane (CA/Pd(0)) were found to be highly efficient heterogeneous catalysts for Suzuki-Miyaura cross-coupling reactions between phenylboronic acid and a broad range of iodo-, bromo-, and electron-poor chloroarenes [67]. The synthesis of such a hybrid system started with the preparation of Pd NPs by H2 decomposition of Pd(acac)2 salt dissolved in the BMI.BF4 ionic liquid at 75 °C for 1 h. Pd NPs with a diameter of 2.7 ± 0.4 nm were obtained in the form of a black suspension. Upon purification and drying, the Pd NPs were mixed with a cellulose acetate solution to generate the CA/Pd(0) hybrid membrane. XRD, SEM, energy-dispersive spectroscopy (EDS), and TEM were further used to thoroughly characterize the resulting hybrid membranes. The CA/Pd(0) membrane-assisted coupling reactions were achieved under eco-friendly conditions (Table 1), i.e., phosphine-free and with water as the solvent, and gave good-to-excellent yields, depending on the nature of the haloarene counterpart.
Critical Appraisal
As can be seen from all the above-mentioned investigations regarding the preparation of hybrid materials consisting of metallic nanoparticles adsorbed at the pore surface of porous polymers meant for heterogeneous supported catalysis applications, different parameters are to be taken into serious consideration when designing such materials. The first involves some inherent features of the porous materials, that is to say the porosity, the nature of the polymer, and the nature of the functional groups at the pore surface. Indeed, it is well known that the porosity should allow for efficient mass transfer across the materials so that the reactant has easy access to the catalytic sites present on the nanometals. It has notably been demonstrated that the presence of two porosity levels in the materials allows for better catalytic efficiency [45]. The chemical nature of the functional groups present at the pore surface can have an impact on metal chelation but also on the shape/size of the resulting nanoparticles when the in situ strategy is implemented. Many studies in this domain have favored the functionalization of the pore surface with amine groups; this notably allowed for a dense and homogeneous adsorption of metallic nanoparticles through the in situ strategy [57,58]. Other chemical functions have also shown interesting results, such as thiol [44] or carboxylate functions [59], depending on the targeted metal to be adsorbed at the pore surface. Additionally, the chemical nature of the porous polymer has a crucial role in the morphology of the nanoparticles [89,90]. This was demonstrated by growing silver nanoparticles from glass surface-grafted polymer brushes.
In that case, sphere-like shapes, nanorods, and weakly branched dendritic nanostructures could be obtained as a function of the polymer chains tethered at the surface, i.e., constituted of 4-vinylpyridine, oligo(ethylene glycol) ethyl ether methacrylate, or a mixture of both monomers. Finally, the shape of the nanoparticles has been shown to play an important role in their resulting catalytic efficiency [91]. This was notably demonstrated by comparing the catalytic performances of silver nanoparticles displaying different morphologies, namely truncated triangular, cubic, and near-spherical silver nanoparticles. The authors observed up to a 15-fold difference in the catalytic activity of those nanoparticles in the oxidation of styrene, depending on the nanoparticle shape.
Conclusions
To conclude, hybrid materials consisting of metallic NPs immobilized at the pore surface of various porous polymer-based materials have shown a strong potential in heterogeneous supported catalysis of several molecular reactions, among which is the reduction of different chemical species used in industry that are considered major pollutants. Among them, one can find nitroaromatic compounds and dyes but also metallic cations or even uranium ions. Other molecular reactions that can lead to high value-added products, such as hydrogenation and C-C homo- or cross-coupling reactions, have also been the subject of numerous investigations by different research groups using heterogeneous supported catalysts. Some of these hybrid materials can even undergo one-pot cascade reactions. Different features of such nanoparticle supports are very important to take into consideration, such as the chemical functionality of the material surface, as well as the pore size and polymer morphology. On the other side, the synthetic strategy for the nanoparticles is also of paramount importance as it dictates their size and their dispersion over the porous support. More importantly, these catalytic hybrid materials can be recycled without significant loss of their catalytic efficiency. Thus, these research investigations should pave the way towards more and more efficient catalytic systems. Further, the immobilization of organometallics or organocatalysts at the pore surface of these porous polymers may lead to the preparation of smart materials that can be used for asymmetric synthesis, for instance.
"year": 2022,
"sha1": "a51f28a2e451555c5b6ba45891426c9cef551082",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/14/21/4706/pdf?version=1667477233",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adf486fb326e3268a9de9f14a253518794038e9c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
Improved Teaching Model of Ideological and Political Courses in Chinese Colleges and Universities
The promotion of ideological and political theory courses should respect and understand students' different visions and appeals, uniting successful learning with a fulfilling life and rooting Chinese Dream education deeply in undergraduates' minds. The teaching model is one of the important means of realizing the goal of cultivating high-quality talents. This research on an improved teaching model focuses on the course Morality and Law, designing its targets, paths, content, methods and evaluation to enhance efficiency and effect. The research group conducted a questionnaire survey in Beijing among students in 6 colleges and universities and staff in 67 enterprises covering state-owned, private, Sino-foreign joint venture and foreign-funded companies. The group also interviewed counselors and deans. Through careful analysis, the team summarized a "1133" teaching model with new definitions of targets, methods, and evaluation. The results should be connected with teachers' appointment, promotion and rewards to improve the main channel of ideological and political education.
Introduction
Ideological and political theory courses are the main channel of ideological and political construction, which the CPC Central Committee has been very concerned about. Strengthening and improving ideological and political education in guiding ideology, basic principles, basic requirements, and main ways and methods is more and more important in the 13th Five-Year Plan. A series of documents have been published for the curriculum plan and deployment under the new situation. The course should focus on what students really love, bringing lifelong benefits. The construction of ideological and political theory courses should focus on how to improve the effective realization of the educational objectives. The mission of vocational colleges is to train high-quality and highly skilled personnel. High quality, as the core, includes higher moral quality and a clearer sense of law.
Data Analysis
The survey data were analyzed with SPSS software, and the results suggest that further curriculum reform is very necessary. Based on statistics from the student questionnaire survey and comparative analysis, the present situation is summarized as follows. 40.4% of students are very satisfied with the course and 1.9% are satisfied with it. 57.2% of students want to improve their own thoughts and are attracted by the course content; 42.8% learn the course to cope with the examination. However, 23.1% of students think the course is only a little helpful or useless. Most students hope teachers vary the way of teaching: 15.9% of students want purely theoretical teaching, 37.2% want topic-discussion-based teaching, and 40.9% want case analysis. Pure theoretical teaching is unpopular. The video teaching method is effective at arousing interest in learning: 40.1% of students think the video effect is very good and 36.7% think it is good. The task-driven method improves the ability to learn: 71.4% of students regard task-driven teaching as a method to improve learning ability. It can be seen that 59.3% of students like surfing the Internet to find information in class. Group discussion achieves the best learning effect. 47.3% of students in class seldom listen to the lecture. Students like to participate in practical course activities, with 39% loving off-campus visits. With the declining cognitive ability of vocational college students, it is necessary for teachers to carry out a thorough reform to explain the theory and arouse interest in study.
The Establishment of Scientific World View
In view of the above problem analysis, course development is very necessary. First of all, the political theory course is intended to enable students to establish a scientific world view by providing the basic Marxist views, positions and methods for understanding social development and its laws. That may result in a love for the motherland, for the people, and for socialism with Chinese characteristics. To achieve such a purpose, practice serves as the validation and test of theory, becoming the driving force and source of theoretical innovation.
The connection to practice education
According to Marxist theory, qualitative change advances through continuous integration with practice and the times. In this regard, teachers in the classroom are responsible for integrating theory with practice. Young students connect their understanding with reality and come to appreciate the development and changes of society. Practical education trains students to know the community and to understand the country through the effective means of ideological and political theory courses, which outline the most important and indispensable items. With the deepening of China's educational reform, emphasizing the combination of theory and practice has become a consensus, which is feasible given the environment of the society as a whole.
The broken ivory tower
The previously so-called "ivory tower" in colleges and universities has been broken. Teachers' engagement in practice, backed by strong scientific research, has become a trend. Under such circumstances, the practice of ideological and political theory courses can be described as a flow that reflects the development of the society. Many institutions have added practical extra-curricular links outside the classroom, which benefits ideological and political class education by providing guidance in practical fields under objective conditions. However, it is also a major challenge. It requires teachers to change the traditional teaching philosophy that simply focuses on talking about textbooks. Instead, teachers shall provide an in-depth understanding of the society and the social realities of life, using theory to guide practice and testing theory with practice. In this process, students discuss with peers and with the teachers, who organize the theoretical preparation. Inspired and guided by teachers, students become motivated to learn and to think about social phenomena and problems by themselves.
The practice session
Classroom teaching can only be regarded as laying a foundation and groundwork for teaching practice. Success needs such conditions as available time, appropriate topics and appropriate places for practice. The practice session encourages students to go outside the classroom environment in extra-curricular time, so the timing must be chosen carefully. In general, the annual winter and summer vacations are proper times for activities of social practice. Colleges and universities adjust the examination procedures and time for the practical performance of students and the completion of the organizational operation. The ideological and political theory course can take advantage of this opportunity to encourage students not only to participate in professional practice, but also to understand and meet the requirements of the political theory course. Colleges and universities shall select the best practice themes and practice venues, since poor choices may fail to produce the desired effect or even work in the opposite direction, and shall seize on outstanding achievements or problems in the community. It is also necessary to select several representative industries or regions as bases for the practice, which could facilitate contacts and arrangements. Students are encouraged to express their views fully during the course of social practice.
The Comfortable Training
Training involves much more than just talking to people; it requires posing intelligent questions that inspire students to talk about their thoughts, their work and their concerns. Teachers can use a proactive process to help them upgrade their skills, reach their own solutions and understand their own actions. On the occasion of the 18th National Congress of the CPC, the Chinese Dream was emphasized in the education of college students. The ideological and political theory course in colleges and universities shoulders the important responsibility of instructing students in Marxist theory and the Party's guidelines. Thus, the promotion of the Chinese Dream into the classroom, into teaching materials and into students' minds, and the implementation of a long-term mechanism, is one of the most important tasks for colleges and universities in China. Therefore, reform in education and teaching does improve the attractiveness and appeal of the Chinese Dream. In order to broaden teachers' horizons, investigation and academic exchanges among teaching staff should be highly encouraged to improve teaching methods, explore methods that comply with teaching principles and students' interests, and inspire students' desire to think with preferable and favorable examples and language. It is also necessary to activate the atmosphere in classrooms with unique teaching strategies.
The help of information means
In order to provide sufficiently interesting training, teachers take three information-based means to mobilize students' learning interest. The first is the improvement of classroom teaching strategy with simplified and exquisite PPT slides, which attract the attention of students; students learn professional knowledge effectively by virtue of PPT and video. The second is the use of a computer network teaching platform which realizes online teaching, answering questions and submitting homework. The third is the use of communication networks, such as WeChat, QQ and Fetion, stimulating students' enthusiasm for the main theory content. These means help to facilitate students' class activities with vivid animation and music. Taking education online instead of in the classroom is a change for many people, so teachers are required to adopt strategies to convert traditional training to online training.
Strengthening practice in training
Teaching in the classroom must be combined with more and more practice. Through the survey, it can be found that students want teachers to adopt practice teaching. Due to limited funds, off-campus practice cannot be carried out too frequently. Practice teaching takes the following three forms. The first is classroom practice, using typical cases to illustrate the theory and solve a series of problems. The second is campus practice, organizing campus research, interviews, and moot courts. The third is outside-school visits to patriotism bases, anti-drug bases and the courts to strengthen book knowledge. The task-driven teaching method thus increases the proportion of practice inside and outside the classroom. The method, with the core of "teaching, learning and doing", can improve students' practical application ability and also improve the teaching effect in the practice training bases.
New teaching model
The existing teaching model is "113", which has 1 subject, 1 guidance, and 3 practice models. Along with the rapid development of informatization, the generation born after 1995 is more interested in receiving knowledge through information technology. Therefore, based on the 113 model, teachers adjust the teaching mode to "1133" to highlight 3 kinds of informatization means alongside the three kinds of practice teaching models, closer to vocational students' characteristics. The purpose is to improve students' practical application ability. Instead of evaluating theoretical knowledge only, examinations need further changes to attract attention and improve the process of learning behavior and application ability. Teachers are also required to further develop students' participation, to arouse the enthusiasm of students, and to form good habits. With the popularity of smartphones, it is feasible to make full use of network access to course-related information, giving assignments on online platforms and stimulating interaction between teachers and students.
Supporting materials
In order to improve classroom efficiency, teachers have to apply some new and specialized teaching materials for students. A complete set of teaching guidance can help students grasp the key points and difficulties of learning. Five modules on being a better person who obeys morality and law help students become professionals through a series of tasks. By clarifying learning targets, students digest and absorb the teaching material content through exercises in order to improve effectiveness as well as meet quality goals. Self-disciplined students are more able to cooperate with their classmates. The training makes them competent and confident.
Conclusion
Teachers' efforts are a vital part of improving the attraction and appeal of college ideological and political theory courses. The courses can play an important role in the process of constructing a socialist harmonious society. Therefore, it is necessary to strengthen and improve new ways and new methods to conduct college ideological and political education well. Primarily, the 1133 model is offered as an alternative ideological and political teaching model, providing a novel approach for the analysis of ideologies through examining their internal conceptual morphology. The result is to interpret ideologies as particular combinations of meaning from an indeterminate range of meanings at the disposal of a society. Hence, ideologies are located at the meeting point between logic (internal constraints on their permutations), culture (the impact of social practices and events over time and space) and the regularities of morphological patterning that they display. Besides, it is feasible to discover and set fine examples and commend model teachers to enhance their sense of responsibility and honor. Thirdly, it is required to increase the assessment and examination of students' learning progress. The assessment should be integrated with the contents of the Chinese Dream and students' behaviors, with innovative forms and methods. Finally, attention should be paid to the scientific design of the assessment and its feedback. Advice and opinions should be gathered widely to promote the "three entries" work of the Chinese Dream continuously and steadily on the basis of people-oriented and talent-based teaching.
"year": 2018,
"sha1": "ee6cc8efae3402e43674a87ba5de21ff10d43c39",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.23977/aetp.2018.21018",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ee6cc8efae3402e43674a87ba5de21ff10d43c39",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Shrinkage Testimator for the Common Mean of Several Univariate Normal Populations
The challenge of combining two unbiased estimators is a common occurrence in applied statistics, with significant implications across diverse fields such as manufacturing quality control, medical research, and the social sciences. Despite the widespread relevance of estimating the common population mean µ, this task is not without its challenges. A particularly intricate issue arises when the variances within populations are unknown or possibly unequal. Conventional approaches, like the two-sample t-test, fall short in addressing this problem as they assume equal variances among the two populations. When there exists prior information regarding the population variances (σi², i = 1, 2), with the consideration that σ1² and σ2² might be equal, a hypothesis test can be conducted: H0: σ1² = σ2² versus H1: σ1² ≠ σ2². The initial sample is utilized to test H0, and if we fail to reject H0, we gain confidence in incorporating our prior knowledge (after testing) to estimate the common mean µ. However, if H0 is rejected, indicating unequal population variances, the prior knowledge is discarded. In such cases, a second sample is obtained to compensate for the loss of prior knowledge. The estimation of the common mean µ is then carried out using either the Graybill–Deal estimator (GDE) or the maximum likelihood estimator (MLE). A noteworthy discovery is that the proposed preliminary testimators, denoted as μPT1 and μPT2, exhibit superior performance compared to the widely used unbiased estimators (GDE and MLE).
Introduction
The problem of combining two unbiased estimators arises frequently in applied statistics, where it has important implications in a wide range of fields, from quality control in manufacturing to medical research and the social sciences [1][2][3]. For instance, in the context of manufacturing, it is essential to ensure that the means of different production lines are within specified quality standards. By using common mean inference, one can determine whether the means of these production lines are within specified quality standards. If one line's mean falls outside the acceptable range, it could signal a quality issue. In clinical trials, researchers often need to compare the effectiveness of different treatments or drugs. The common mean approach can help determine whether a particular treatment yields statistically different results in terms of patient outcomes, such as recovery times or symptom alleviation.
The technique of combining and analyzing data from several independent studies on a specific topic or research question is referred to as meta-analysis [3]. The goal of meta-analysis is to obtain a more accurate and reliable estimate of the overall effect size or treatment effect than what can be achieved by any individual study alone [3,4]. It provides a systematic and quantitative approach to synthesizing evidence from various studies, allowing researchers to draw more robust conclusions and make generalizations [3].
A well-known context for this problem occurred when Meier [2] was asked to draw inferences about the mean of albumin in plasma protein in human subjects based on results from four experiments [2], shown in Table 1. Another scenario happened when Eberhardt et al. [5] had results from four experiments on nonfat milk powder, and the problem was to draw inferences about the mean Selenium content in nonfat milk powder by combining the results from four methods (Table 2). Despite the broad applicability of the common mean µ, estimating it is not without difficulties. One of the most difficult problems emerges when the population variances are unknown or possibly unequal. Traditional approaches for addressing this issue, such as the two-sample t-test, are insufficient since they assume equal variances and are not designed for combining and analyzing data from several independent studies on a specific topic or research question.
To formulate the present problem, we assume only that there are two normal populations with a common mean µ but with unknown and possibly unequal variances σ1², σ2² > 0. Let us assume that we have independent and identically distributed (i.i.d.) observations Xi1, . . ., Xin_i from N(µ, σi²), i = 1, 2, and define X̄i and Si as

$$\bar{X}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} X_{ij}, \qquad S_i = \sum_{j=1}^{n_i}\left(X_{ij} - \bar{X}_i\right)^2, \quad i = 1, 2.$$

Note that these statistics, {X̄1, S1, X̄2, S2}, are all mutually independent. Again, it can be noted that {X̄1, S1, X̄2, S2} are minimal sufficient statistics for (µ, σ1², σ2²) but are not complete. As a result, one cannot obtain the uniformly minimum-variance unbiased estimator (UMVUE), if it exists, using the standard Rao–Blackwell theorem on an unbiased estimator of the common mean µ.
The natural meta-analysis question now is the problem of combining several estimates of an unknown quantity to obtain an estimate of improved precision. A similar problem arises in the analysis of incomplete block experiments. The "intra-block" and "inter-block" estimates of varietal means have different variances, and the recovery of "inter-block information" is an attempt to combine these estimates in the most efficient manner. In the case when the two population variances are completely known, the common mean µ can easily be estimated as

$$\hat{\mu} = \frac{(n_1/\sigma_1^2)\,\bar{X}_1 + (n_2/\sigma_2^2)\,\bar{X}_2}{n_1/\sigma_1^2 + n_2/\sigma_2^2},$$

which is the UMVUE, the best linear unbiased estimator (BLUE), and the maximum likelihood estimator (MLE), with a variance given as

$$\operatorname{Var}(\hat{\mu}) = \left(\frac{n_1}{\sigma_1^2} + \frac{n_2}{\sigma_2^2}\right)^{-1}.$$

In our present problem, the population variances are unknown and possibly unequal. The most appealing unbiased estimator of µ is the Graybill–Deal estimator (GDE) [6], given as

$$\hat{\mu}_{GD} = \frac{\hat{w}_1\bar{X}_1 + \hat{w}_2\bar{X}_2}{\hat{w}_1 + \hat{w}_2}, \qquad \text{with} \quad \hat{w}_i = \frac{n_i}{\hat{\sigma}_i^2}, \quad \text{where} \quad \hat{\sigma}_i^2 = \frac{S_i}{n_i - 1}, \quad i = 1, 2.$$

For the two-sample case, Graybill and Deal [6] first showed that this unbiased estimator has a smaller variance than either sample mean provided that both sample sizes are greater than 10. Since then, several papers have been written generalizing and extending their findings; see [7][8][9][10][11] and the references therein. On the other hand, Meier [2] suggested a method for setting an approximate confidence interval for µ centered at μGD. Furthermore, [12,13] developed approximate confidence intervals centered at μGD. The properties of such estimators have received a lot of attention in the literature. We would like to highlight the contributions of Kifle et al. [1], Sinha et al. [3], Sinha [14], Hartung [15], and Krishnamoorthy and Moore [16] in particular.
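The GDE above can be computed in a few lines. The paper's own computations were carried out in R, so the following Python sketch is only illustrative; the sample sizes, means, and variances used in the demonstration are made up.

```python
import numpy as np

def graybill_deal(x1, x2):
    """Graybill-Deal estimator of the common mean of two normal samples
    with unknown, possibly unequal variances: a precision-weighted
    combination of the two sample means."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    s1, s2 = x1.var(ddof=1), x2.var(ddof=1)   # unbiased sample variances S_i/(n_i-1)
    w1, w2 = n1 / s1, n2 / s2                 # estimated precisions of the sample means
    return (w1 * x1.mean() + w2 * x2.mean()) / (w1 + w2)

# Illustrative data: common mean 5, very different variances.
rng = np.random.default_rng(0)
x1 = rng.normal(5.0, 1.0, size=30)
x2 = rng.normal(5.0, 3.0, size=30)
print(graybill_deal(x1, x2))
```

Because the weights are inverse estimated variances, the estimate leans toward the sample mean of the less variable population.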
Even though many generalizations of μGD have been proposed in recent years, it still commonly remains one of the central figures in statistical modeling and methods in meta-analysis due to its natural appeal. We may have prior information that the variances σ1² and σ2² may be equal. Then, we can test the hypothesis H0: σ1² = σ2² and then estimate the common mean µ of these two independent normal populations depending on the outcome of this test. Stein [17] introduced and thoroughly discussed the preliminary test shrinkage estimator. His work had a profound impact on the field of statistical estimation, particularly for the common mean problem with unknown variances. His approach has inspired various developments and applications in statistics and has become a foundation for the use of shrinkage estimators in modern statistical practice.
Thompson [18] proposed a shrinkage technique for improving an existing estimator θ̂ of a parameter θ, of the general form

$$\tilde{\theta} = q\,\hat{\theta} + (1 - q)\,\theta_0, \qquad 0 \le q \le 1,$$

which shrinks the usual estimator toward a guess value θ0 and lowers the mean square error (MSE) of the UMVUE of the mean of a population. It was noted that the shrinkage estimator outperforms the usual estimator if the guess value θ0 is chosen in a way that aligns with reality. Therefore, rather than considering q as a fixed constant in the shrinkage estimator, one should consider it as a weight that falls between 0 and 1. In this case, q can be treated as a continuous function of some relevant statistics, with the expectation that its value will drop monotonically as (θ̂ − θ0) increases. Other researchers, like Walker, Schuurmann, and Raghunathan [19], also proposed a testimator for the mean of a normal distribution. It was further noted in the literature that, when prior information is available, the shrinkage estimators for the parameters of various distributions perform better than the usual estimators in terms of mean square error when the guessed value is close to the true value [18,20,21].
If we assume that prior knowledge about the population variances (σi², i = 1, 2) is available and that the variances σ1² and σ2² may be equal, we can test the hypothesis H0: σ1² = σ2² versus H1: σ1² ≠ σ2². The first-stage sample is used to test H0, and if we fail to reject H0, we feel comfortable in using the prior knowledge (having tested it) to estimate the common mean µ. However, if H0 is rejected, we discard our prior knowledge and obtain a second sample to make up for the loss of the prior knowledge, estimating the common mean µ using the GDE or MLE. This type of adaptive estimator based on a preliminary test has been used by many researchers [22,23].
Estimating and testing hypotheses about the common mean of several univariate normal populations is an important problem. This study proposes a preliminary testimator for the common mean µ with unknown and possibly unequal variances. The behavior of the resulting preliminary testimator is studied by deriving expressions for its bias, MSE, and relative efficiency (RE), and its performance is evaluated. The proposed method incorporates preliminary testing to assess the equality of the population variances before estimating the common mean µ. When significant differences in variances are detected, the preliminary test shrinkage estimator adjusts the weight assigned to each sample mean, shrinking estimates from populations with smaller sample variances towards the overall mean. This is the main motivation behind this revisit to the common mean problem, filling certain gaps analytically as well as computationally while proposing a preliminary test shrinkage estimator.
Materials and Methods
It is natural to test a null hypothesis based on the prior, uncertain non-sample information at hand; the testimator then follows. A testimator is a two-step estimator that estimates a parameter of interest based on the result of a preliminary test. For estimating the common mean µ, we consider the hypothesis H0: σ1² = σ2² versus H1: σ1² ≠ σ2². We define our proposed preliminary testimator for the common mean µ as

$$\hat{\mu}_{PT} = \mu_{Grand}\, I\!\left(c_{1\alpha} \le F \le c_{2\alpha}\right) + \hat{\mu}_{UE}\,\left\{1 - I\!\left(c_{1\alpha} \le F \le c_{2\alpha}\right)\right\},$$

where μUE is an unbiased estimator of µ, F = [S1/(n1 − 1)]/[S2/(n2 − 1)] is the usual variance-ratio statistic, and I(·) is the indicator function, equal to 1 when its argument holds and 0 otherwise. The more this ratio deviates from 1, the stronger the evidence for unequal population variances. To find the F critical values (c1α and c2α), we look at two choices.

Choice 1: Equal tail probabilities, fixing the probability in each tail at α/2, so that c1α and c2α are the α/2 and 1 − α/2 quantiles of the F(n1 − 1, n2 − 1) distribution. The hypothesis that the two variances are equal is rejected if F < c1α or F > c2α.

Choice 2: The choice of c1α and c2α for a given coverage (1 − α) is not unique. In order to make it unique, we minimize the length between the upper and lower bounds; that is, the critical values are found by minimizing c2α − c1α subject to P(c1α ≤ F ≤ c2α | H0) = 1 − α.
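Choice 1 (equal tail probabilities) is straightforward to implement. The sketch below uses SciPy's F quantiles and is only illustrative — the paper's computations were done in R, and the sample sizes and α in the demonstration are hypothetical.

```python
import numpy as np
from scipy.stats import f

def equal_variance_pretest(x1, x2, alpha=0.05):
    """Equal-tail F pre-test of H0: sigma1^2 == sigma2^2 (Choice 1).
    Returns True when H0 is NOT rejected at level alpha."""
    n1, n2 = len(x1), len(x2)
    ratio = np.var(x1, ddof=1) / np.var(x2, ddof=1)   # F statistic s1^2 / s2^2
    c1 = f.ppf(alpha / 2, n1 - 1, n2 - 1)             # lower critical value c_{1,alpha}
    c2 = f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)         # upper critical value c_{2,alpha}
    return bool(c1 <= ratio <= c2)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 50)
y = rng.normal(0, 5, 50)                 # clearly unequal variances
print(equal_variance_pretest(x, y))
```

With such different variances the ratio falls far below the lower critical value, so H0 is rejected and the pre-test returns False.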
The GDE of the Common Mean
The famous and most widely used estimator is the GDE [6], an unbiased estimator of the common mean µ which is uniformly better than either X̄i, i = 1, 2, provided that n1 and n2 are both larger than ten. We define our proposed testimator for the common mean as

$$\hat{\mu}_{PT1} = \mu_{Grand}\, I\!\left(c_{1\alpha} \le F \le c_{2\alpha}\right) + \hat{\mu}_{GD}\,\left\{1 - I\!\left(c_{1\alpha} \le F \le c_{2\alpha}\right)\right\}.$$

Here μGD is the Graybill–Deal estimator given above, and the grand (pooled) mean can be written as

$$\mu_{Grand} = \frac{n_1\bar{X}_1 + n_2\bar{X}_2}{n_1 + n_2} = B\,\bar{X}_1 + (1 - B)\,\bar{X}_2, \qquad B = \frac{n_1}{n_1 + n_2}.$$

Notice that B is a function of n1 and n2 only. In the expressions above, I(·) is the indicator function, equal to 1 when its argument holds and 0 otherwise.
Bias of Preliminary Testimator μPT 1
The bias of the proposed preliminary testimator is E(μPT1) − µ. Since the indicator depends only on (S1, S2), and since, conditionally on (S1, S2), both µGrand and μGD have expectation µ, it follows that E(μPT1) = µ. Hence our proposed preliminary testimator μPT1 is an unbiased estimator of the common mean µ.
Mean Square Error of Preliminary Testimator μPT 1
The MSE of μPT1 can be expressed as MSE(μPT1) = E(μPT1 − µ)². Without loss of generality, take µ = 0. Therefore, the above expression for MSE(μPT1) reduces to E(μPT1²).
The MLE of the Common Mean
Pal et al. [24] revisited the common mean problem by elucidating the structure of the MLE and comparing it with the GDE. It was found that the MLE has better overall performance than the popular GDE. It can be noted that the MLE of the common mean µ does not have a closed-form expression when σi² is unknown and, as a result, the exact sampling distribution is impossible to derive [24].
The MLEs μML, σ²1(ML), and σ²2(ML) are defined by the likelihood equations, in which both σ²1(ML) and σ²2(ML) are functions of S1, S2, and D². Thus, μML can be written as a function of (X̄1, X̄2, S1, S2); it is an unbiased estimator of the common mean µ.
It should be noted that numerical iteration must be used to obtain the MLE μML of the common mean µ, because the system of likelihood equations may have multiple solutions, and one must determine which of these solutions truly provides the global maximum of the likelihood function.
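Since the likelihood equations have no closed form, the MLE can be obtained by maximizing the profile log-likelihood numerically. The sketch below (Python/SciPy rather than the authors' R code) uses the fact that, for fixed µ, the variance MLEs are the mean squared deviations. Bracketing µ between the two sample means and using a single bounded search are simplifying assumptions: in general the profile likelihood can be multimodal, so in practice candidate optima should be compared.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def common_mean_mle(x1, x2):
    """Profile-likelihood MLE of the common mean mu.
    For fixed mu, the MLEs of the variances are mean((x_i - mu)^2),
    so it suffices to minimize the profiled negative log-likelihood
    sum_i (n_i / 2) * log(mean((x_i - mu)^2)) over mu."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)

    def neg_profile_loglik(mu):
        return sum(len(x) / 2.0 * np.log(np.mean((x - mu) ** 2))
                   for x in (x1, x2))

    lo, hi = sorted([x1.mean(), x2.mean()])
    if lo == hi:                      # degenerate case: sample means agree
        return lo
    # Search between the two sample means, where the maximizer lies.
    res = minimize_scalar(neg_profile_loglik, bounds=(lo, hi), method="bounded")
    return float(res.x)
```

A call such as `common_mean_mle(x1, x2)` then returns a numerical approximation to μML for the two samples.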
So, let t1 = S1/n1 and t2 = S2/n2. Equations (14) and (15) can then be rewritten in terms of t1 and t2. Note that β1 > t1 and β2 > t2, so the equations simplify further. Note also that √D = |X̄ − Ȳ| > 0; Equation (18) can then be rewritten, and substituting Equation (20) into (19) yields the ML solution. We then define our proposed preliminary testimator for the common mean as

μPT2 = I(c1α ≤ F ≤ c2α) μGrand + [1 − I(c1α ≤ F ≤ c2α)] μML.

It is easy to write μGrand = {n1 D/(n1 + n2)} + X̄1, so our proposed testimator μPT2 can be written accordingly.

2.2.1. Bias of Preliminary Testimator μPT2

The bias of the proposed preliminary testimator is equal to E(μPT2) − µ; evaluating this expectation shows that it vanishes, so the MLE-based testimator μPT2 is an unbiased estimator of the common mean µ.
Mean Square Error of Preliminary Testimator μPT 2
The MSE of μPT 2 can be expressed as
Relative Efficiency
The efficiency of μPT1 and μPT2 relative to μGDE and μMLE, respectively, is defined as RE(μPT1) = MSE(μGDE)/MSE(μPT1) and RE(μPT2) = MSE(μMLE)/MSE(μPT2), so that RE > 1 favours the proposed testimator.
Bias and Mean Squared Error
We now examine how well the suggested preliminary testimators perform, in terms of bias and MSE, under choices 1 and 2. After that, we also consider the performance of the testimators μPT1 and μPT2 by computing the RE. In order to attain a significant level of accuracy, each simulated bias and MSE value was obtained using Q = 10^5 replications, making the simulation extremely large. It can be noted that the MSEs and REs of the proposed testimators μPT1 and μPT2 are all functions of n1, n2, and δ = σ1²/σ2². Out of these parameters, n is the sample size and δ is the guessed value of the parameter used in the suggested preliminary testimator. These massive computations were performed using R (version 3.6.2) and RStudio (version 1.3.959) [25,26]. The algorithm for our proposed testimators of the common mean µ is as follows:

1. Select two positive integers n1 and n2.
2. Generate samples of sizes n1 and n2 from the two normal populations and compute the sample means and variances.
3. Test H0: σ1² = σ2² at significance level α, using the F-test statistic of Section 2, for H0 versus H1.
4. If we fail to reject H0, take the estimator μPT = μGrand. However, if H0 is rejected, take μPT as μGD or μML.
5. Evaluate the performance of the proposed estimator using the simulated bias and MSE over the Q replications.

Tables 3 and 4 show the variation in the values of δ for equal sample sizes n1 = n2 = n (say) < 25 and ≥ 25. The simulated bias values varied from −0.0027 to 0.0018 for μPT1 and μPT2. The simulated biases were found to be very close to zero, indicating that the proposed testimators are indeed unbiased estimators of the common mean µ. The values of μPT1 were more stable than those of μPT2, in the sense that while the simulated MSE varied from 0.3634 to 0.0002 for μPT1, the range of the MSE for μPT2 was 3.1211 to 0.0000. This was anticipated, as μPT2 results from solving a set of non-linear equations, which can introduce a small amount of computational error into the overall sample variation. As δ increased, the MSE in general decreased. Furthermore, μPT2 was better than μPT1 for extreme values of δ. Moreover, choices 1 and 2 were found to be extremely close to one another, suggesting that there is most likely little difference between these two test procedures. When the sample sizes n1 and n2 are drastically different from each other (i.e., 0.2 < n1/n2 < 5.0), the MSE curves of μPT1 and μPT2 cross each other only once (from small values of δ to large, or vice versa), as shown in Table 5. As δ increases, the MSE in general decreases and μPT2 becomes better than μPT1. In some cases, for δ in the middle (i.e., δ = 0.5), there may not be any statistical difference between the two simulated MSEs; but for δ too small, μPT1 is certainly better than μPT2. Moreover, for very small values of δ, choices 1 and 2 were found to be extremely close to one another. However, for large values of δ, choice 2 is certainly better than choice 1.
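The simulation loop described above can be sketched in Python (the paper's computations were done in R). Sample sizes, variances, and the number of replications below are illustrative, and the F-test uses the equal-tail (Choice 1) critical values with the GDE as the fallback estimator:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(2024)

def simulate_pt1(mu, s1, s2, n1, n2, alpha=0.05, Q=20000):
    """Simulated bias and MSE of the preliminary testimator that uses
    the grand mean if the F-test keeps H0 and the GDE otherwise."""
    c1 = f_dist.ppf(alpha / 2, n1 - 1, n2 - 1)       # Choice 1 critical values
    c2 = f_dist.ppf(1 - alpha / 2, n1 - 1, n2 - 1)
    est = np.empty(Q)
    for q in range(Q):
        x1 = rng.normal(mu, s1, n1)
        x2 = rng.normal(mu, s2, n2)
        F = x1.var(ddof=1) / x2.var(ddof=1)
        if c1 <= F <= c2:                             # fail to reject H0
            est[q] = (n1 * x1.mean() + n2 * x2.mean()) / (n1 + n2)
        else:                                         # reject: fall back to GDE
            w1 = n1 / x1.var(ddof=1)
            w2 = n2 / x2.var(ddof=1)
            est[q] = (w1 * x1.mean() + w2 * x2.mean()) / (w1 + w2)
    bias = est.mean() - mu
    mse = np.mean((est - mu) ** 2)
    return bias, mse
```

With equal variances (δ = 1), the simulated bias should be near zero and the MSE close to that of the grand mean, consistent with the tables above.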
Figures 1 and 2 present a summary of the RE results for the proposed testimators across varying values of δ, with n1 fixed at 15. It is noteworthy that as the second sample size (n2) increases, the RE generally rises. Specifically, for μPT1, the RE typically increases initially but then declines as δ increases, as depicted in Figure 1. Conversely, for μPT2, the RE generally ascends with increasing δ, as illustrated in Figure 2.
Figures 3-5 outline the RE outcomes for μPT1 across different values of δ under unequal and equal sample sizes. Notably, when δ > 0.5, the RE remains nearly constant across significance levels (α) for μPT1. Again, it can be noted that when δ = 1, the RE initially reaches a maximum magnitude and decreases for further increases in δ. Conversely, for μPT2, when n1 is larger than n2 or the sample sizes are equal, the RE remains consistent across different α levels, as illustrated in Figures 6 and 7. However, in cases where n1 is less than n2, the RE varies across α values, with α = 0.1 exhibiting the highest RE values, as illustrated in Figure 8.
Asymptotic Normality
The asymptotic distribution of the estimator μPT1 can be derived from the following facts. When the value of F lies within the range of c1α and c2α, the estimator follows a normal distribution with mean µ and variance τ², independently of F. However, if F is less than c1α or greater than c2α, which occurs asymptotically when σ1² ≠ σ2², the asymptotic distribution of μGD is also normal with mean µ and variance τ². Similarly, the asymptotic distribution of μPT2 can be derived from the fact that the MLE μML converges to a normal distribution with mean µ and variance τ². Thus, μPT1 ∼ N(µ, τ²) and μPT2 ∼ N(µ, τ²) for a large sample size n, where n1 = n2 = n and τ² = (1/n)(σ1² σ2²)/(σ1² + σ2²) = θ²/n; details are presented in Appendices A and B. The coverage probability, determined from 100,000 replications, is above 95% for both proposed testimators across various σ2² values, except when n = 5, as reported in Table 6.
Application
The Environmental Protection Agency (EPA) of the United States provided a data set to evaluate gasoline quality based on Reid vapor pressure (RVP); more information can be found in the article by Yu et al. [27]. Occasionally, an EPA inspector would visit gas pumps in the city, take gasoline samples of a particular brand, and measure the RVP on the spot, which produced cheap and quick measurements. Once in a while, the inspector, after measuring the RVP at the spot, would ship a gasoline sample to the laboratory for a measurement of presumably higher precision at a higher cost. Two types of RVP measurements were thus taken on the same chemical quantity: X, the field measurement, and Y, the lab measurement. It was assumed that the measurements X and Y had the common mean µ. Table 7 contains two independent samples of RVP measurements: the field measurements X, with a sample size of 45, and the lab measurements Y, with a sample size of 15. Shapiro tests and Q-Q plots were conducted to assess the distributions of the field (X) and lab (Y) data. The findings showed that both sets of data exhibited a normal distribution, X ∼ N(µ, σx²) and Y ∼ N(µ, σy²). The sample means were X̄ = 7.998 and Ȳ = 8.283, with sample variances sx² = 0.131 and sy² = 0.245, respectively.
First of all, μGDE, μMLE, and μPT were found to be very close to each other, indicating that there is probably not much difference between these estimators in estimating the common mean µ (Table 8). We do not want to draw any general conclusions here, but our theoretical and simulated results indicate that our proposed preliminary testimator μPT = 8.140 is viable and could be used for this particular application if we assume that σx² = σy², as the samples of gasoline of a particular brand are drawn from the same gas pumps in the city.
Conclusions
The estimation of an unknown quantity using data from several independent but non-homogeneous samples has drawn increasing attention in the last decade. The approach has applicability in numerous fields, as seen in the variety of applications covered in the recent book by Sinha et al. [3]. This study's primary focus was the performance of the proposed preliminary testimators μPT1 and μPT2 of a common mean with unknown and possibly unequal variances. Our finding is that the proposed preliminary testimators μPT1 and μPT2 perform better than the popular unbiased estimators (GDE and MLE) in terms of relative efficiency (RE). The considered testimators were better than the classical estimators especially when σ1² = σ2². For the balanced case, μPT1 and μPT2 using choice 2 seem to uniformly outperform choice 1. It is hoped that this paper will stimulate further research on testimators of the common mean. It goes without saying that large sample sizes (n1, n2) will be more advantageous when using the proposed testimators.
Figure 1 .
Figure 1.Relative Efficiency of μPT 1 for various δ with fixed n 1 , and sample 2 denotes n 2 .
Figure 2 .
Figure 2. Relative Efficiency of μPT 2 for various δ with fixed n 1 , and sample 2 denotes n 2 .
Funding:
This research was funded by the University Staff Doctoral Programme (USDP) hosted by the University of Limpopo in collaboration with the University of Maryland Baltimore County.Again, the first author acknowledges the financial support from the Research and Innovation Department of the University of Fort Hare.
Table 1 .
Albumin in plasma protein of four different experiments.
Table 2 .
Selenium content in nonfat milk powder using four methods.
Table 6 .
Coverage probability for the proposed testimators for a fixed σ 2 1 and n 1 = n 2 = n.
Table 7 .
Field and lab data on Reid vapor pressure for newly reformulated gasoline.
Table 8 .
Point estimates for Reid vapor pressure for newly reformulated gasoline.
Assessment of the Permeability of 3,4-Methylenedioxypyrovalerone (MDPV) across the Caco-2 Monolayer for Estimation of Intestinal Absorption and Enantioselectivity
3,4-Methylenedioxypyrovalerone (MDPV) is a widely studied synthetic cathinone heterocycle, investigated mainly for its psychoactive effects. It is a chiral molecule and one of the most abused new psychoactive substances worldwide. Enantioselectivity studies for MDPV are still scarce, and the extent to which it crosses the intestinal membrane is still unknown. Herein, an in vitro permeability study was performed to evaluate the passage of the enantiomers of MDPV across the Caco-2 monolayer. To detect and quantify MDPV, a UHPLC-UV method was developed and validated. Acceptable values within the recommended limits were obtained for all evaluated parameters (specificity, linearity, accuracy, limit of detection (LOD), limit of quantification (LOQ), and precision). The enantiomers of MDPV were found to be highly permeable across the Caco-2 monolayer, which can indicate a high intestinal permeability. Enantioselectivity was observed for the Papp values in the basolateral (BL) to apical (AP) direction. Furthermore, the efflux ratios are indicative of efflux through a facilitated diffusion mechanism. To the best of our knowledge, determination of the permeability of MDPV across the intestinal epithelial cell monolayer is presented here for the first time.
Introduction
Synthetic cathinones are a vast group of chiral compounds derived from cathinone, an alkaloid found in Khat (Catha edulis) leaves [1]. Chewing fresh khat leaves, which contain many components such as alkaloids, flavonoids, amino acids, glycosides, sterols, vitamins, and minerals, has been a tradition for centuries in some cultures [2]. Cathinone derivatives emerged in the 1930s, first synthesized with medicinal intent. Methcathinone was one of the first synthetic cathinones and was meant to reach the market as an antidepressant. Other examples are pyrovalerone, explored as a treatment for obesity, chronic fatigue, and lethargy, and methylone, a potential antidepressant and anti-Parkinson's agent [3-5]. Nevertheless, these compounds never reached the market due to their powerful addictive properties [6]. To this day, only the synthetic cathinone bupropion has reached the market, being currently used as an antidepressant and a support for smoking cessation [7].
Nowadays, synthetic cathinones are one of the most reported groups of new psychoactive substances (NPS) with new derivatives emerging on the drug market every year with unknown properties [8,9]. Therefore, the development of studies with synthetic cathinones and their enantiomers is crucial to better understand their properties [10,11]. Although enantioselectivity studies are still scarce, differences between enantiomers have been found in several cases [10,[12][13][14].
Drug substances can enter the body through several absorption sites, the gastrointestinal tract being the most important. Absorption through the gastrointestinal tract can be influenced by many factors, such as the physicochemical properties of the drug, gastrointestinal motility, and food intake [15,16]. For chiral drugs, differential absorption of the enantiomers may occur, leading to different permeability. Enantioselectivity is not expected for passive diffusion, but it can occur when a transport-mediated process is involved [17]. To better understand drug absorption, permeability studies need to be performed.
The Caco-2 cell line is one of the most used in vitro model for drug intestinal permeability and absorption studies [18]. Caco-2 cells are derived from human colorectal adenocarcinoma and can spontaneously differentiate into a polarized epithelial monolayer of cells ( Figure 1) with tight junctions, microvilli, and several enzymes and transporters expressing most of morphological and functional properties of enterocytes. These characteristics provide this cell line the ability to mimic the small intestine [19,20].
Some studies have reported differences between the behavior of enantiomers of compounds in the permeability across the Caco-2 cell line [21,22]. For instance, when studying propranolol, a nonselective β-adrenoceptor blocker for the treatment of hypertension and cardiovascular disorders, S-propranolol was found to be the most transported in the apical (AP) to basolateral (BL) direction, while R-propranolol was the most transported in the BL to AP direction [21].
After investigating the absorptive properties of Khat alkaloids in vitro, Atlabachew et al. [23] found that the transport across the mucosa of the oral cavity contributes significantly to the overall absorption of Khat alkaloids into the bloodstream and that they seem to be well absorbed into the gastrointestinal tract. Additionally, cathinone displayed significantly greater permeability than the other Khat alkaloids [23].
Although synthetic cathinones are widely studied, the extent to which these compounds cross the intestinal membrane is still unknown. In fact, the permeability across the gastrointestinal tract of synthetic cathinones has only been investigated for the enantiomers of pentedrone and methylone using the Caco-2 model [24]. Moreover, enantioselectivity was observed for both cathinones, R-(-)-pentedrone and S-(-)-methylone being the most permeable compounds [24]. This work focused on 3,4-methylenedioxypyrovalerone (MDPV), one of the most abused synthetic cathinones worldwide [25]. MDPV comprises a heterocyclic structure with a 3,4-methylenedioxy ring and a pyrrolidine ring (Figure 2), making this derivative part of the group of 3,4-methylenedioxypyrrolidinophenones or mixed cathinones [3].
The main goal of this work was to investigate the intestinal permeability across the gastrointestinal tract and potential enantioselectivity of MDPV using the in vitro Caco-2 model. To achieve that, the development and validation of an UHPLC-UV method for the detection and quantification of MDPV were performed. To the best of our knowledge, determination of the permeability of MDPV across the intestinal epithelial cell monolayer is presented here for the first time.
Chromatographic Method

The chromatographic conditions of this work were based on Silva et al. [24] with some changes, since a different cathinone and column were used. The flow rate was adjusted from 0.15 mL/min to 0.12 mL/min to allow a lower pressure, and the mobile phase was adapted from 25 mM NH₄CH₃CO₂:CH₃CN:HCOOH (80:20:0.1 v/v/v) to 25 mM NH₄CH₃CO₂:CH₃CN:HCOOH (75:25:0.1 v/v/v). To obtain the optimal wavelength for the detection and quantification of MDPV, the ultraviolet (UV) spectrum of MDPV was determined (Figure 3). Four peaks with high absorption were found in the tested wavelength interval. Moreover, UV spectra were determined for the mobile phase and Hank's balanced salt solution with calcium and magnesium [HBSS(+/+)], the buffer of the study, to detect potential interferences with MDPV absorption. The optimum wavelength selected for the detection and quantification of MDPV was 236 nm. In the optimized chromatographic conditions, a good resolution was obtained for the peak of MDPV, which eluted in less than 5 min (Figure 4A). The samples from the permeability assay and calibration curves (all in HBSS (+/+)) were injected into the UHPLC after being filtered; no previous extraction or other treatment was needed.
Specificity
To evaluate specificity, twenty blank samples containing only an HBSS (+/+) solution were injected and analyzed to detect potential chromatographic interferences with MDPV's peak. No interference was observed between 4 and 5 min, the retention time corresponding to MDPV. The chromatogram of one of the blank samples is found in Figure 4B.
Linearity

For the evaluation of linearity, five calibration curves in a concentration range of 0.5-500 µM, independently prepared on five different days, were used to obtain the average linear regression equation and coefficient of determination (r²). An average r² of 0.9999 was found. All linearity data are summarized in Table 1.
Accuracy
After injecting three selected calibrators contained in the linear concentration interval (6 (low), 40 (medium), and 300 (high) µM) along with a calibration curve, the experimental concentrations of each calibrator were calculated through the linear regression equation of the curve and accuracy was measured through Equation (2). Percentages between 102% and 109% were obtained (Table 2).
Precision
The results obtained for the evaluation of the inter- and intra-day precision of both the equipment and the method showed coefficients of variation (CV) between 2.88% and 13.87% (Table 3) [27,28].

Table 3. Inter-day and intra-day precision data.

LOD and LOQ

Using Equations (3) and (4), with the slope of the average linear equation (Table 1) and the standard deviation of the analytical signal of 20 blank samples, a LOD of 0.063 µM and a LOQ of 0.190 µM were calculated for this method.
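Equations (3) and (4) are not reproduced in this excerpt; assuming the common ICH-style definitions LOD = 3.3σ/S and LOQ = 10σ/S (consistent with the reported ratio LOQ/LOD ≈ 3), the calculation can be sketched as follows, with made-up inputs rather than the values measured in this work:

```python
def lod_loq(sigma_blank: float, slope: float) -> tuple[float, float]:
    """ICH-style detection and quantification limits.

    sigma_blank: standard deviation of the blank analytical signal.
    slope: slope of the calibration curve (signal per unit concentration).
    """
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq

# Hypothetical blank noise and calibration slope, for illustration only.
lod, loq = lod_loq(sigma_blank=0.19, slope=10.0)
```

With these definitions, LOQ is always about three times LOD, whatever the noise level and slope.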
2.3.6. Stability

Table 4 contains the peak area variation between the initial and final injections for each temperature tested, the obtained percentages being between 3.4% and 12.5%, which are, in general, low peak area variations. Moreover, peak areas were also compared to check for statistically significant differences (p < 0.05) between day 0 and each temperature, and also between temperatures. No significant differences were observed.
Cell Viability Assay
In order to find a non-cytotoxic concentration to be used in the permeability assay with the Caco-2 cell line, the cells were exposed to racemic MDPV in a concentration range up to 1500 µM for 24 h and the Neutral Red (NR) assay was performed. The results (Figure 5) showed no statistically significant differences in cell viability up to the concentration of 750 µM. The three highest concentrations tested resulted in a decrease of cell viability in a concentration-dependent manner, so they were not considered for the assay. The concentration of 300 µM was selected for the permeability assay.

Figure 5. Cytotoxicity evaluation in Caco-2 cells exposed to racemic MDPV (0-1500 µM) for 24 h performed by the NR assay. Results are expressed as mean ± SD from four independent experiments (performed in triplicate). * p < 0.05, **** p < 0.0001 vs. control (0 µM).

Permeability Assay

The trans-epithelial electrical resistance (TEER) was monitored for 21 days after seeding (Figure 6). Significant TEER values were observed from day 6 (over 500 Ω·cm²). They gradually increased to approximately 1000 Ω·cm² until day 18, after which they remained constant until day 21, with a final average value of 1052 Ω·cm².

Cells were exposed to 300 µM of each enantiomer on day 22 after seeding. Samples were collected at the chosen time points (40, 60, 90, 120, 180, 240, and 300 min) from the receiver compartment. After 6 weeks of storage, the validated UHPLC method was used to quantify the MDPV present in the samples. The cumulative quantity transported from the donor compartment to the receiver compartment was calculated for each time point. The results, shown in Figure 7, were expressed as the percentage of the cumulative quantity relative to the initial quantity. A considerable percentage of passage was observed from the first time point (40 min) for both enantiomers in both directions, increasing significantly during the rest of the interval. No statistically significant difference was found between the enantiomers at any of the time points for either direction.
No statistically significant difference was found between the enantiomers in any of the time points for both directions. Mass balance was calculated using Equation (5) (described in method section), the obtained values being between 98% and 104% ( Figure 8A). Moreover, Papp values were calculated for the AP to BL direction using Equation (6) considering sink conditions (Figure 8B). Average Papp values of 1.8 × 10 −5 and 1.81 × 10 −5 cm/s were found for S-(-)-MDPV and R-(+)-MDPV, respectively, in this direction with no statistically significant differences between the enantiomers. For the BL to AP direction, Equation (7) was used to calculate Papp values under non-sink conditions ( Figure 8B). Average Papp values of 3.4 × 10 −5 and 2.8 × 10 −5 cm/s were found for S-(-)-MDPV and R-(+)-MDPV, respectively, in this direction. In this case, statistically significant differences (p < 0.05) were found between the enantiomers. Additionally, when comparing the directions for each enantiomer separately, significant differences were found, the difference being more significant for S-(-)-MDPV (p < 0.001 for R-(+)-MDPV vs. p < 0.0001 for S-(-)-MDPV). Lastly, efflux ratios were calculated for each enantiomer using Equation (9) ( Figure 8C). Efflux ratios of 1.8 for S-(-)-MDPV and 1.6 for R-(+)-MDPV were obtained with no statistically significant difference between enantiomers. Mass balance was calculated using Equation (5) (described in method section), the obtained values being between 98% and 104% ( Figure 8A). Moreover, P app values were calculated for the AP to BL direction using Equation (6) considering sink conditions ( Figure 8B). Average P app values of 1.8 × 10 −5 and 1.81 × 10 −5 cm/s were found for S-(-)-MDPV and R-(+)-MDPV, respectively, in this direction with no statistically significant differences between the enantiomers. For the BL to AP direction, Equation (7) was used to calculate P app values under non-sink conditions ( Figure 8B). 
Average P app values of 3.4 × 10 −5 and 2.8 × 10 −5 cm/s were found for S-(-)-MDPV and R-(+)-MDPV, respectively, in this direction. In this case, statistically significant differences (p < 0.05) were found between the enantiomers. Additionally, when comparing the directions for each enantiomer separately, significant differences were found, the difference being more significant for S-(-)-MDPV (p < 0.001 for R-(+)-MDPV vs. p < 0.0001 for S-(-)-MDPV). Lastly, efflux ratios were calculated for each enantiomer using Equation (9) ( Figure 8C). Efflux ratios of 1.8 for S-(-)-MDPV and 1.6 for R-(+)-MDPV were obtained with no statistically significant difference between enantiomers. Mass balance was calculated using Equation (5) (described in method section), the obtained values being between 98% and 104% ( Figure 8A). Moreover, Papp values were calculated for the AP to BL direction using Equation (6) The values obtained for mass balance, P app values, and efflux ratios are summarized in Table 5. Table 5. Data obtained for mass balance, P app values, and efflux ratios expressed as mean ± SD. # p < 0.05 (between enantiomers), *** p < 0.001, **** p < 0.0001 (between directions).
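Because a sample volume is withdrawn from the receiver compartment and replaced with fresh buffer at every time point, the cumulative quantity transported must add back the amounts removed at earlier samplings before expressing it as a percentage of the initial quantity. A minimal sketch of that bookkeeping (the function name and the simple additive correction are our own, not taken from the paper):

```python
def cumulative_transport_percent(conc_receiver, v_receiver, v_sample, q_initial):
    """Cumulative quantity in the receiver at each time point, corrected for
    the volume withdrawn (and replaced with buffer) at earlier time points,
    expressed as % of the initial donor quantity.

    conc_receiver : receiver concentrations at each sampling (amount/volume)
    v_receiver    : receiver compartment volume
    v_sample      : volume withdrawn per sampling
    q_initial     : initial quantity placed in the donor compartment
    """
    removed_so_far = 0.0
    percents = []
    for c in conc_receiver:
        q_cum = c * v_receiver + removed_so_far  # present + previously sampled
        percents.append(100.0 * q_cum / q_initial)
        removed_so_far += c * v_sample
    return percents
```

With hypothetical receiver concentrations of 10 and 20 nmol/mL, a 1 mL receiver, 0.2 mL samples, and 100 nmol initially in the donor, the corrected cumulative passage is 10% and then 22% (the extra 2% at the second point is the drug carried out in the first sample).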
Discussion
To evaluate the potential enantioselective intestinal absorption of MDPV, an in vitro permeability assay was performed using the Caco-2 cell line. The enantiomers of MDPV were separated by a semi-preparative chiral liquid chromatography method [26] with high enantiomeric purity. To analyze the passage of the MDPV enantiomers across the Caco-2 monolayer, a quantification method was needed. Herein, a UHPLC-UV method was selected based on previous work [24] and optimized for the conditions of this work. The optimized chromatographic conditions (Figure 4A) resulted in a well-resolved peak for MDPV, and analysis of the chromatograms of blank samples detected no interference between the buffer and MDPV (Figure 4B), demonstrating good specificity for the analysis and quantification of MDPV. All parameters were within recommended limits [27,28]. r 2 values higher than 0.999 showed acceptable linearity in the analyzed concentration interval (0.5-500 µM) (Table 1). Likewise, acceptable accuracy was shown by percentages within the recommended limits for this parameter (100 ± 15%) (Table 2), and for precision, CV values below 15% were observed (Table 3). LOQ and LOD values showed that the developed method was sensitive enough for the detection and quantification of MDPV in the low micromolar concentration range. Thus, a UHPLC-UV method was successfully developed and validated for the detection and quantification of MDPV in HBSS (+/+).
Additionally, the stability of the samples was also evaluated after 6 weeks of storage at different temperatures. The calculated variation in peak area (Table 4) was in general low for every condition tested, meaning that the samples remained stable during the 6 weeks of storage.
The cytotoxic effects of racemic MDPV in Caco-2 cells were first assessed to find a non-cytotoxic concentration to further be used on the permeability assay. If Caco-2 cells were exposed to a cytotoxic concentration of MDPV, it could disrupt the cell monolayer, leading to erroneous results. On the other hand, if the concentration was too low, it could be harder to quantify. Thus, the concentration of 300 µM was selected ( Figure 5).
To the best of our knowledge, apart from our previous work [24], no other studies with the Caco-2 cell line have been reported for synthetic cathinones. Thus, a close comparison was made with that study. For example, racemic pentedrone and methylone displayed no cytotoxic effects in Caco-2 cells in a concentration range up to 2000 µM [24]. Herein, a significant decrease in cell viability was observed starting at a MDPV concentration of 1000 µM. Thus, MDPV seems to be more cytotoxic than both pentedrone and methylone in this cell line.
To better resemble in vivo permeability conditions, the formation of a Caco-2 monolayer with good integrity is highly important. TEER values between 500 and 1100 Ω·cm² are expected for fully differentiated monolayers [29]. In this work, values above 500 Ω·cm² were observed from day 6 and, after 18 days of culture, values reached 1000 Ω·cm² and remained constant until the assay (Figure 6). These results are indicative of good monolayer integrity.
No statistically significant differences were found in the transport of the enantiomers of MDPV through the Caco-2 monolayer in either direction at the selected time points (Figure 7). When comparing with the results obtained for the enantiomers of pentedrone and methylone (only in the AP to BL direction) [24], the enantiomers of MDPV showed a higher extent of passage across the Caco-2 monolayer at a lower concentration. The presence of the pyrrolidine ring in the structure of MDPV, absent in both pentedrone and methylone, decreases the polarity of MDPV, which may lead to greater diffusion of this cathinone derivative across cell membranes [30]. Thus, this structural difference could explain the greater passage of MDPV across the Caco-2 monolayer when compared with cathinones lacking that ring.
In this type of study, the calculation of the mass balance can be useful to understand if there were compound losses during the assay. A low mass balance can be caused by adsorption of the compound to the experimental material (plate or filter, for instance), metabolism or retention of the compound inside cells or cell membranes, which can, consequently, lead to errors in quantification of the compound and calculation of permeability coefficients [31]. In this work, values of mass balance were close to 100% ( Figure 8A), suggesting that there were no compound losses during the assay.
Moreover, the calculation of P app values provides an estimation of the permeability of compounds. When P app values are higher than 1 × 10 −6 cm/s, compounds are described as highly permeable substances, while when P app values are lower than 1 × 10 −6 cm/s, compounds are considered weakly permeable substances [32]. Since P app values were over 1 × 10 −6 cm/s ( Figure 8B), the enantiomers of MDPV were considered to be highly permeable across the Caco-2 monolayer, which consequently suggests a high intestinal permeability. Additionally, significant differences (p < 0.05) were found between directions for both enantiomers, this difference being more significant for S-(-)-MDPV (p < 0.001 for R-(+)-MDPV vs. p < 0.0001 for S-(-)-MDPV). Although no significant differences were found at each time point between the enantiomers, significant differences (p < 0.05) were found in the P app values for the BL to AP direction, suggesting enantioselectivity in the overall passage velocity of MDPV through the Caco-2 monolayer in that direction.
The efflux ratios were also calculated for each enantiomer. If higher than 2, this ratio is the first indicator of a potential involvement of an active transport process in the passage across the Caco-2 monolayer [31]. In this work, efflux ratios lower than 2 were found for both enantiomers, with no statistically significant differences between them ( Figure 8C). These results suggest that no active efflux should be expected for the passage of MDPV across the Caco-2 monolayer. Nonetheless, P app values for the BL to AP direction were significantly higher than the P app values for the AP to BL direction, and enantioselectivity was found between the enantiomers in the P app values in the BL to AP direction. Since enantioselectivity only occurs when a transport protein is involved [17], the involvement of a transport protein could be expected for the efflux of MDPV through facilitated diffusion, a passive-mediated transport that depends on a proton gradient [31]. For instance, some members of the solute carrier (SLC) family of transporters, such as OATP2B1 (involved in xenobiotics transport) or OCT1 (which transports protonated molecules), can be involved in facilitated diffusion and are expressed in Caco-2 cells [33][34][35].
Multiple injections of racemic MDPV were performed using Hexane:Ethanol:Diethylamine (97:3:0.1) as the mobile phase and a flow rate of 1.5 mL/min. Analyses were performed at 25 °C, in isocratic mode under UV detection (254 nm). Hydrochlorides were formed by precipitating the collected fractions of each enantiomer with HCl in diethyl ether (2 M). Solutions of each enantiomer were prepared and reinjected to determine the e.r., obtained from the relative percentages of the peak areas [37]: e.r. = [S-(-)-MDPV] : [R-(+)-MDPV], where [S-(-)-MDPV] and [R-(+)-MDPV] are the peak areas of the corresponding enantiomers expressed as percentages of the total peak area.
Instrumental and Chromatographic Conditions
A Thermo® Scientific UHPLC system with a Thermo® Scientific Spectra System P4000 pump was used, with a Thermo® Scientific Spectra AS3000 automatic injector and a Thermo® Scientific Spectra System UV8000 model DAD. The software used to process the chromatographic data was Chromeleon™ 7.0. A Kinetex® EVO C18 LC column (1.7 µm, 2.1 mm × 100 mm) connected to a SecurityGuard™ ULTRA pre-column (sub-2 µm, 2.1 mm × 2 mm) was used. Chromatographic analyses were performed at room temperature at a flow rate of 0.12 mL/min using 25 mM NH4CH3CO2:CH3CN:HCOOH (75:25:0.1 v/v/v) as the mobile phase. The sample injection volume was 5 µL. Stock standards of 5 mM MDPV and all solutions were prepared in HBSS (+/+) and filtered through a 13 mm nylon syringe filter with 0.22 µm pore size from Olimpeak™. Calibrators were prepared by dilution of the stock standards with filtered HBSS (+/+) to final concentrations of 0.5, 1, 5, 10, 25, 50, 100, 250, and 500 µM. To obtain the optimal wavelength for the detection and quantification of MDPV, UV spectra were determined for MDPV (100 µM in HBSS (+/+)), the selected mobile phase, and HBSS (+/+) using a UH5300 spectrophotometer.
Method Validation
The Food and Drug Administration (FDA) and the International Conference on Harmonization (ICH) guidelines were followed to validate this method [27,28]. The parameters evaluated were specificity, linearity, accuracy, inter- and intra-day precision, LOQ, and LOD. The stability of the analytes was also evaluated. Results were analyzed using GraphPad Prism 9.0 and Microsoft Excel.
Specificity
Twenty blank samples (HBSS (+/+)) were injected into the UHPLC to detect the presence of potential co-eluting peaks that could affect the analysis by the tested method.
Linearity
To evaluate linearity, calibrators with concentrations ranging from 0.5 to 500 µM of MDPV (0.5, 1, 5, 10, 25, 50, 100, 250, and 500 µM) were independently prepared on five different days and injected into the UHPLC to obtain a calibration curve each day. The plot of peak area vs. concentration was analyzed by linear regression to obtain the r 2 .
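The linearity assessment reduces to an ordinary least-squares fit of peak area against concentration and inspection of r². A self-contained sketch of that regression (function and variable names are illustrative, not from the paper):

```python
def linear_fit(x, y):
    """Least-squares slope, intercept, and r^2 for a calibration curve
    (x: concentrations, y: peak areas)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy * sxy) / (sxx * syy)  # squared Pearson correlation
    return slope, intercept, r2
```

For a perfectly linear response the fit returns r² = 1; in practice the acceptance criterion here was r² > 0.999 over 0.5-500 µM.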
Accuracy
The accuracy of an analytical method represents the deviation between the experimental concentration and the expected (nominal) concentration, calculated by the following equation [27]: Accuracy (%) = (experimental concentration/nominal concentration) × 100. For the evaluation of this parameter, three different calibrators within the linear concentration range (6 (low), 40 (medium), and 300 (high) µM) were selected. Five independent solutions were prepared and injected in triplicate for each concentration. Furthermore, a calibration curve was injected and used to determine the experimental concentration of each calibrator.
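Accuracy here is the experimental concentration expressed as a percentage of the nominal value, judged against the 100 ± 15% window mentioned earlier. A minimal sketch (function names and the explicit limit check are ours):

```python
def accuracy_percent(experimental, nominal):
    """Accuracy expressed as % of the nominal concentration."""
    return 100.0 * experimental / nominal

def within_limits(acc, low=85.0, high=115.0):
    """Check a result against the 100 +/- 15% acceptance window."""
    return low <= acc <= high
```

For example, a measured 290 µM against a 300 µM nominal gives about 96.7%, which passes; 250 µM against 300 µM gives about 83.3%, which fails.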
Precision
The precision of the methodology shows the proximity between a series of data acquired from several injections under the same conditions. The repeatability of the method can be determined for a short time interval (intra-day precision), or within different days (inter-day precision) [28]. Three calibrators, with concentrations within the linear concentration range (10 (low), 100 (medium) and 500 (high) µM), were selected for the evaluation of this parameter.
The inter-day precision of the equipment was evaluated by the preparation and injection in triplicate of the three calibrators on five consecutive days. For the determination of the inter-day precision of the analytical method, the three calibrators were independently prepared and injected in triplicate on five consecutive days.
The intra-day precision of the equipment was evaluated by preparing and injecting five times the three calibrators on the same day. Lastly, for the determination of the intraday precision of the analytical method, five solutions of the three calibrators were prepared independently and injected in triplicate on the same day.
Precision was expressed as the CV of each calibrator.
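Since precision is reported as the coefficient of variation of replicate injections, it can be computed directly from the replicate measurements. A minimal sketch (the function name is ours):

```python
from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation (%) of a series of replicate measurements:
    CV = sample standard deviation / mean * 100."""
    return 100.0 * stdev(replicates) / mean(replicates)
```

Replicate areas of 98, 100, and 102 give a CV of 2.0%, comfortably below the 15% acceptance limit used in this work.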
LOD and LOQ
LOD and LOQ were calculated using the following equations [28]: LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the analytical signal of 20 blank samples and S is the slope of the calibration curve.
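With σ taken from the 20 blank injections and S from the calibration slope, the ICH-style limits follow directly. A minimal sketch (the function name is ours):

```python
def detection_limits(sigma_blank, slope):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq
```

For instance, a blank-signal standard deviation of 1.0 with a calibration slope of 10.0 (arbitrary units) yields LOD = 0.33 and LOQ = 1.0 in concentration units.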
Stability
To evaluate stability, three different calibrators within the linear concentration range (10 (low), 100 (medium), and 500 (high) µM) were prepared and injected into the UHPLC. The remaining volume of each calibrator was divided into several vials and further stored at different temperatures (room temperature, 4 °C, −20 °C, and −80 °C) for 6 weeks (the same storage time as the samples obtained from the permeability assay with Caco-2 cells). After the storage period, samples were injected in duplicate, and the obtained analyte peak area for each calibrator at each temperature was compared with the peak area obtained on day 0. Percentages of peak-area variation were calculated for each concentration.
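The stability readout is simply the relative change in peak area after storage versus day 0. A minimal sketch (the function name is ours):

```python
def peak_area_variation_percent(area_day0, area_stored):
    """Relative change (%) in analyte peak area after storage, vs. day 0.
    Negative values indicate loss of analyte signal."""
    return 100.0 * (area_stored - area_day0) / area_day0
```

For example, a peak area that drops from 100 to 97 after 6 weeks corresponds to a −3% variation, consistent with the generally low variations reported in Table 4.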
Caco-2 Cell Culture
The Caco-2 cell line was acquired from the European Collection of Cell Culture (ECACC, UK) and routinely maintained in DMEM supplemented with 10% FBS, 1% penicillin/streptomycin solution, and 1% non-essential amino acids at 37 °C in a humidified atmosphere of 5% CO2, with the medium changed every two days. Subcultures were obtained by trypsinization with a 0.25% trypsin/EDTA solution. For all assays, cells were used between the 14th and 20th passages.
Cell Viability Assay
MDPV cytotoxicity for the Caco-2 cell line was evaluated through the NR assay as previously described [38]. This method provides an indication of the ability of lysosomes from viable cells to incorporate the NR dye. Caco-2 cells were seeded onto 96-well plates using a density of 60,000 cells/cm 2 to obtain confluent monolayers at the experimental day. The cells were incubated with the racemate of MDPV (0, 50, 150, 300, 500, 750, 1000, 1250, and 1500 µM) in fresh cell culture medium for 24 h. After the selected time interval, the cell culture medium was removed and replaced by a 50 µg/mL NR solution in HBSS (+/+) for 90 min, after which the NR solution was discarded, a lysis solution (50% EtOH/1% glacial acetic acid solution) was added, and the absorbance was read at 540 nm in a 96-well plate reader (PowerWaveX; Bio-Tek, Winooski, VT, USA). Additionally, 1% Triton X-100 was used as the positive control. Data were expressed as the percentage of cell viability relative to untreated cells. Data were obtained from four independent experiments performed in triplicate.
Permeability Assay
For the in vitro permeability assay, Caco-2 cells were seeded on polycarbonate transwell inserts (12-well, 0.4 µm pores, Corning) using a density of 120,000 cells/cm 2 and cultivated for twenty-one days to allow the development of a differentiated monolayer. TEER was measured to control the integrity of the monolayers for twenty-one days.
After twenty-one days, the cells were incubated with 300 µM of each enantiomer of MDPV (in HBSS (+/+)). The assay was performed in both the AP to BL and BL to AP directions. For AP to BL, exposure to the enantiomers was performed in the AP compartment (450 µL), while for BL to AP, exposure was performed in the BL compartment (1250 µL). Right after the exposure (time 0), 50 µL was collected from each donor compartment (where exposure was performed). At suitable time intervals (40, 60, 90, 120, 180, 240, and 300 min), 600 µL (for AP to BL) and 200 µL (for BL to AP) of sample was collected from the BL and AP compartments (receiver compartments), respectively, and the same volume of HBSS (+/+) was added. At the last time point (300 min), 50 µL was also collected from the donor compartment. Collected samples were stored at −80 °C until the day of the UHPLC analysis. After 6 weeks of storage, samples were filtered with a 13 mm nylon syringe filter with 0.22 µm pore size from Olimpeak™, and quantification was performed by the validated UHPLC-UV method. Results were obtained for three experiments, performed in duplicate.
Mass Balance
Mass balance was calculated using the following equation [31]: Mass balance (%) = ((Q R,T + Q D,f)/Q D,0) × 100, where Q R,T is the total cumulative quantity in the receiver chamber (nmol), Q D,f is the final quantity in the donor chamber (nmol), and Q D,0 is the initial quantity in the donor chamber (nmol).
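The mass-balance check compares everything recovered at the end of the assay with what was initially loaded. A minimal sketch (the function name is ours; the formula follows the definitions of Q R,T, Q D,f, and Q D,0 above):

```python
def mass_balance_percent(q_receiver_total, q_donor_final, q_donor_initial):
    """Mass balance (%) = (Q_R,T + Q_D,f) / Q_D,0 * 100.
    Values near 100% indicate no compound losses during the assay."""
    return 100.0 * (q_receiver_total + q_donor_final) / q_donor_initial
```

For example, recovering 40 nmol cumulatively in the receiver and 62 nmol remaining in the donor, from 100 nmol loaded, gives 102%, within the 98-104% range reported here.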
Permeability Coefficent (P app )
To calculate P app, it was first necessary to determine whether the results were obtained under sink or non-sink conditions. Sink conditions are considered if the ratio of receiver concentration to donor concentration (C R /C D ) at each sampling point is less than 10%. If the ratio is higher, P app has to be calculated considering a non-sink analysis [39]. In this work, we found that the results from the AP to BL direction were under sink conditions, while the results from the BL to AP direction were under non-sink conditions ( Figure S1).
For sink conditions, P app was calculated in cm/s using the following equation [24]: P app = (∆Q/∆t)/(A × C 0), where ∆Q/∆t is the amount of compound transported over time (mol/s), A is the surface area of the monolayer (cm 2), and C 0 is the initial drug concentration on the donor side (mol/mL). For non-sink conditions, a continuous change of the donor and receiver concentrations is considered, and the following equation must be used for each time interval to calculate the theoretical concentration at the receiver side: C R(t) = Q tot/(V R + V D) + (f × C R(t-1) − Q tot/(V R + V D)) × e^(−P × A × (1/V R + 1/V D) × ∆t), where C R(t) is the theoretical concentration in the receiver side at time t (µM), Q tot is the total amount of drug in both chambers at time t (nmol), V R and V D are the volumes in the receiver and donor compartments (mL), respectively, C R(t-1) is the concentration in the receiver chamber at the previous time point (µM), f is the dilution factor for the sample replacement, A is the surface area of the monolayer (cm 2), ∆t is the time interval (s), and P is an initial approximation of the permeability coefficient calculated through Equation (6) (cm/s). P app values are determined by minimization of the Sum of Squared Residuals (SSR): SSR = Σ t (C R(t)theor − C R(t)exp)², where C R(t)theor is the theoretical concentration in the receiver side at time t calculated through Equation (7) and C R(t)exp is the experimental concentration in the receiver side at time t obtained directly from the quantification of the samples. A more in-depth explanation of sink/non-sink conditions and the application of these equations can be found in Tavelin et al. [39].
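The sink-condition calculation is a single division, and the non-sink calculation is a least-squares search over candidate P app values. A minimal sketch of both steps, with a simple grid search standing in for a full optimizer and a caller-supplied model function standing in for the Tavelin recursion (all names are ours; a toy model is used in the example, not the actual Equation (7)):

```python
def papp_sink(dq_dt, area, c0):
    """Sink-condition apparent permeability: P_app = (dQ/dt) / (A * C0)."""
    return dq_dt / (area * c0)

def fit_papp_non_sink(p_candidates, model, c_exp):
    """Pick the P_app candidate minimizing the sum of squared residuals (SSR)
    between model-predicted and measured receiver concentrations.

    model : callable taking a trial P and returning predicted concentrations
            at the sampling times (e.g. the non-sink recursion).
    c_exp : measured receiver concentrations at the same times.
    """
    def ssr(p):
        return sum((ct - ce) ** 2 for ct, ce in zip(model(p), c_exp))
    return min(p_candidates, key=ssr)
```

In practice the model would implement the time-interval recursion with the dilution factor f; here a toy linear model suffices to show that the SSR minimization recovers the permeability that generated the data.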
Efflux Ratio
The efflux ratio, the ratio between the permeability coefficients obtained for each direction, was calculated using the following equation [31]: Efflux ratio = P app (BL → AP) / P app (AP → BL) (9)
Statistical Analysis
All statistical calculations were performed using GraphPad Prism 9.0 for Windows (GraphPad Software, San Diego, CA, USA) and Microsoft Excel. Kolmogorov-Smirnov and Shapiro-Wilk normality tests were used to evaluate the normality of the data distribution. For the cytotoxicity studies, the statistical comparisons were performed using one-way ANOVA, followed by Holm-Sidak's multiple comparisons test. For the permeability experiment, one/two-way ANOVA was used to make statistical comparisons, followed by Holm-Sidak's/Tukey's multiple comparisons test. Differences were considered significant for p values lower than 0.05.
Conclusions
A UHPLC-UV method was successfully validated for the detection and quantification of MDPV in the transport buffer of this study, HBSS (+/+). The results showed that the enantiomers of MDPV were highly permeable across the Caco-2 monolayer. Enantioselectivity was found between the enantiomers in the P app values obtained for the BL to AP direction, suggesting the involvement of a transport protein. Efflux ratios indicate that a facilitated diffusion mechanism should be expected for the efflux of the enantiomers of MDPV. To the best of our knowledge, this is the first study on the permeability of MDPV enantiomers across an intestinal epithelial cell monolayer. More studies with other synthetic cathinones and their enantiomers should be performed.
"year": 2023,
"sha1": "0199315b5a8f7ddbccbe9858745394220413e77d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/3/2680/pdf?version=1675148891",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2507d61373006845af720b1c3a0c111f5a3f6a0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cancer Stem Cell-Like Side Population Cells in Clear Cell Renal Cell Carcinoma Cell Line 769P
Although cancers are widely considered to be maintained by stem cells, the existence of stem cells in renal cell carcinoma (RCC) has seldom been reported, in part due to the lack of unique surface markers. We here identified cancer stem cell-like cells with side population (SP) phenotype in five human RCC cell lines. Flow cytometry analysis revealed that 769P, a human clear cell RCC cell line, contained the largest amount of SP cells as compared with other four cell lines. These 769P SP cells possessed characteristics of proliferation, self-renewal, and differentiation, as well as strong resistance to chemotherapy and radiotherapy that were possibly related to the ABCB1 transporter. In vivo experiments with serial tumor transplantation in mice also showed that 769P SP cells formed tumors in NOD/SCID mice. Taken together, these results indicate that 769P SP cells have the properties of cancer stem cells, which may play important roles in tumorigenesis and therapy-resistance of RCC.
Introduction
Renal cancer is an important health problem, causing over 15,000 deaths in North America annually. Renal cancer with metastasis or at advanced stage in adults is resistant to conventional chemotherapeutic drugs [1]. Elucidating the genesis of this cancer will help the early diagnosis and treatment, thereby improving the prognosis.
Solid tumors are composed of diverse types of cells with different capacities for proliferation. Only a small population of these cells can form tumors in immunodeficient mice [2]. This observation has led to the concept of cancer stem cells (CSCs), also called tumor-initiating cells or stem-like cancer cells [3,4,5,6], which are thought to be capable of proliferating, self-renewing, and differentiating into multiple lineages, thereby playing an essential role in both the development and treatment of tumors [2,3]. Although CSCs have been isolated from several types of human tumors, including hematologic cancers [7], ovarian cancer [8], prostate cancer [9], breast cancer [10], and brain tumors [11], the lack of CSC-specific cell surface antigen markers has limited further investigation on this topic [12]. Side population (SP) is a flow cytometry (FCM) term defining cell clusters with a strong ability to efflux the DNA dye Hoechst 33342 via ABC transporters. Side population cells disappear upon treatment with either calcium channel blockers or inhibitors of ABC transporters, such as verapamil and rapamycin [13]. This activity leads to the "side" (low fluorescence) phenotype of the population and is believed to be a fundamental self-protective function and thus a universal hallmark of stem cells [14,15]. Since it was first introduced by Goodell et al. in 1996 [16], SP cells have been widely reported to be enriched in various cancerous tissues, such as breast cancer [17], gastrointestinal system tumors [18], and small-cell lung cancer [19], and in cell lines such as nasopharyngeal carcinoma [20], hepatocellular carcinoma [21], and bladder cancer cell lines [22]. SP cells, with stemness potentials, can form xenograft tumors in animals and are resistant to chemotherapy and radiotherapy, contributing to tumor relapse [23].
RCC, the third most common cancer of the urinary tract, accounts for approximately 3% of all human malignancies. RCCs are classified as clear cell, papillary, chromophobe, collecting duct, and unclassified RCC, with clear cell RCC (CCRCC) as the most prevalent type, accounting for 82% of RCCs. The treatment of metastatic CCRCC remains a major challenge for clinicians, and metastatic disease causes approximately 35% of RCC-related mortality [24]. RCC cases have been increasing steadily for decades [25]. Furthermore, most patients already have either metastatic disease at the initial diagnosis or distant metastases after primary tumor resection [26]. The prognosis of RCC is poor, partly due to the resistance of metastatic RCC to most current therapies, such as chemotherapy and radiotherapy. Targeted therapy against CSCs may bring new hope for improving the prognosis of patients with RCC.
Although significant progress has been made in SP research, the role of SP cells in RCC remains to be fully determined [27,28,29,30]. Addla et al. [29] have reported that both normal and malignant renal epithelial cells contained a proportion of SP cells which were enriched with some stem cell-like properties. More recently, Nishizawa et al. [30] have found that SP cells derived from RCC cells showed higher tumor-initiating ability than NSP cells. Therefore, we hypothesized that SP cells are an enriched fraction of cancer stem cells.
The present study was undertaken to identify SP cells from established human RCC cell lines and to determine their characteristics and roles in tumorigenesis and treatment of RCC. Here, we isolated SP cells from 769P cells, a human CCRCC cell line, by Hoechst staining and flow cytometry. Our in vitro and in vivo experiments demonstrated that SP cells possessed the well-known CSC characteristics of proliferation, self-renewal, and differentiation, as well as strong resistance to chemotherapy and radiotherapy that were possibly related to the ABCB1 transporter. These findings may provide new insights for future CSC research and clinical anti-cancer therapy.
Side Population Analysis and Cell Sorting
Side population analysis and cell sorting were performed as described previously by Goodell et al. [16] with modifications. Briefly, cells were trypsinized, suspended at 1 × 10⁶ cells/mL in pre-warmed RPMI-1640 containing 2% FBS and 10 mmol/L HEPES (Gibco), then incubated with 5 µg/mL Hoechst 33342 (Sigma, St. Louis, MO, USA), either alone or in combination with 50 µmol/L verapamil (Sigma), an ABC transporter inhibitor, in the dark for 90 min in a 37 °C water bath with intermittent mixing. At the end of staining, cells were spun down and resuspended in cold HBSS (Gibco) containing 2% FBS and 10 mmol/L HEPES. FCM analysis and cell sorting were then carried out directly on an EPICS ALTRA Flow Cytosorter (Beckman Coulter, Fullerton, CA, USA). Hoechst 33342 was excited with a 100 mW UV laser and was detected with a 450 BP filter for blue fluorescence and a 675 BP filter for red fluorescence. A 610-nm dichroic mirror short-pass (DMSP) filter was used to separate the emission wavelengths. A polygonal live gate in the FS-Hoechst blue plot was created to exclude debris and dead cells. SP cells and non-SP (NSP) cells were sorted for the following assays.
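Once an SP gate has been drawn on the Hoechst blue-versus-red dot plot (in practice, set manually using the verapamil-treated control), the SP fraction is simply the percentage of events falling inside that gate. A minimal sketch of that final counting step (function name and the rectangular gate predicate are illustrative only; real SP gates are polygonal and instrument-specific):

```python
def side_population_percent(blue, red, in_gate):
    """% of events whose (blue, red) Hoechst fluorescence pair falls
    inside the SP gate predicate."""
    n_sp = sum(1 for b, r in zip(blue, red) if in_gate(b, r))
    return 100.0 * n_sp / len(blue)
```

With a hypothetical low-fluorescence gate, the same function applied to a verapamil-treated sample should return a near-zero percentage, mirroring the 4.82% vs. 0.02% contrast reported below for 769P cells.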
Clone Formation and Long-term Differentiation Assays
Under the autoclone sorting mode, 500 769P cells of SP or NSP phenotype were sorted directly into each 6-cm culture dish and cultured with RPMI-1640 complete culture medium for 10 days. After most cell clones had expanded to more than 50 cells, they were washed twice with PBS, fixed in 75% methanol for 15 min, and stained with crystal violet for 15 min at room temperature. After incubation, dishes were rinsed and the number of clones containing more than 50 cells was counted under a phase contrast microscope. The clone formation efficiency was the ratio of the number of clones to the number of seeded cells. Clone formation assays were repeated in triplicate.
The long-term differentiation assay was performed 10 days after cell sorting, according to the protocol of side population analysis, to determine the differentiation ability of SP and NSP cells.
Detecting mRNA Expression of ABC Family Members in Sorted 769P SP Cells by RT-PCR
Total RNA was extracted from SP and NSP cells separately using Trizol reagent (Invitrogen, San Diego, CA). The expression of ABCB1, ABCC1, and ABCG2 was detected by using the PrimeScript TM RT-PCR Kit (Takara, Otsu, Japan) according to the manufacturer's instructions. The primers (Table 1) were designed and synthesized by Invitrogen. GAPDH was used as an internal reference. The PCR conditions were initial denaturation at 94 °C for 4 min, followed by 40 cycles of denaturation at 94 °C for 45 s, annealing at 58 °C for 30 s, and extension at 72 °C for 45 s, with a final elongation at 72 °C for 8 min. The PCR products were analyzed by electrophoresis on 1.5% agarose for the mRNA expression of ABC family members.
Detecting Protein Expression of ABC Family Members in Sorted 769P SP Cells by Western Blotting
Total proteins were extracted from SP and NSP cells separately and denatured in sodium dodecyl sulfate (SDS) sample buffer, then equally loaded onto 8% polyacrylamide gel. After electrophoresis, the proteins were transferred onto a polyvinylidene difluoride membrane. Blots were incubated with the indicated primary antibodies overnight at 4 °C, then incubated with horseradish peroxidase-conjugated secondary antibody, and were finally detected using enhanced chemiluminescence Western blotting detection reagents (GE Healthcare, UK). The mouse ABCG2 (at 1:1,000 dilution), ABCB1 (1:1,000), and ABCC1 (1:5,000) monoclonal antibodies from Abcam Inc. (Cambridge, UK) as well as anti-GAPDH (1:2,000) from Santa Cruz Biotechnology (Santa Cruz, CA, USA) were used to determine relative protein levels.
Radiation and Drug Sensitivity Assays
To determine the sensitivity of cells to radiation, freshly sorted SP and NSP cells were seeded in 6-well plates (500 cells/well), and were exposed to 5 Gy of X-ray (500 cGy/min, using a 12 cm × 6 cm irradiation field), with or without 30-minute pre-incubation with verapamil (50 µmol/L), the day after sorting. When most cell clones had more than 50 cells, they were stained with crystal violet to determine the number of surviving cells.
To determine the sensitivity of cells to drugs, SP and NSP cells were seeded in 96-well plates (500 cells/well) and cultured the following day with mitoxantrone (MTX, a topoisomerase II inhibitor antineoplastic agent, Sigma), 5-fluorouracil (5-FU, Sigma), or sunitinib (a tyrosine kinase inhibitor, Sutent, Pfizer, New York, USA) in a concentration gradient, with or without 30-minute pre-incubation with 50 µmol/L verapamil as a chemosensitizer to mitoxantrone. Untreated cells were used as control. Four parallel wells were set for each group. After 3 days, the absorbance of each well at a wavelength of 570 nm (A 570 ) was measured. Cell survival rate was calculated using the formula: survival rate = (mean A 570 of the test wells/mean A 570 of the control wells) × 100%. Inhibition rate was calculated using the formula: inhibition rate = 100% − survival rate.
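The survival and inhibition formulas above can be computed directly from the replicate A 570 readings. A minimal sketch (the function name is ours):

```python
from statistics import mean

def survival_and_inhibition(a570_test, a570_control):
    """Survival (%) = mean(test A570) / mean(control A570) * 100;
    inhibition (%) = 100 - survival."""
    survival = 100.0 * mean(a570_test) / mean(a570_control)
    return survival, 100.0 - survival
```

For example, test-well absorbances averaging half those of the untreated control wells correspond to 50% survival and 50% inhibition.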
Xenograft Tumor Formation Assay
Animal experiments were performed in strict accordance with the Guide for the Care and Use of Laboratory Animals of Sun Yat-sen University. The protocol was approved by the Committee on the Ethics of Animal Experiments of the First Affiliated Hospital of Sun Yat-sen University. A total of 54 5- to 7-week-old nonobese diabetic (NOD)/severe combined immunodeficient (SCID) female mice were obtained from the Experimental Animal Center of Sun Yat-sen University. The mice were divided into 6 groups, with 9 mice in each group. The indicated amounts of freshly sorted SP and NSP cells were suspended in 200 µL PBS separately and inoculated subcutaneously into the axillary fossa of NOD/SCID mice immediately after sorting. The mice were monitored twice per week for the formation of palpable tumors. At 6 weeks after inoculation, the mice were euthanized to assess tumor formation. Tumors were measured using a Vernier caliper, weighed, and photographed. A portion of every subcutaneous tumor was collected, fixed in 10% formaldehyde, and embedded in paraffin for pathological assessment after H&E staining. The other portion of every tumor was dissociated into a single-cell suspension, prepared as described previously [34] with minor modifications. Briefly, tumor tissue was manually dissociated into ~0.5 mm fragments and all visible clumps were removed, then digested with 1 mg/mL collagenase type II (Sigma) and 1.2 mg/mL Dispase (Sigma) for 45 to 90 min at 37 °C. Occasionally, 0.2 mg/mL trypsin (Invitrogen) was used for 10 min to ensure dissociation into single cells. Cells were filtered through consecutive 70-µm cell strainers to remove remaining clumps. Collected cells were suspended in PBS supplemented with 1% FCS. At least 100,000 harvested cells were stained for SP analysis. Twenty 5- to 6-week-old NOD/SCID female mice were divided equally into 4 groups for the serial transplantation assay.
Every 5,000 cells dissociated from a xenograft tumor, which was formed with 2,000 SP cells, 20,000 SP cells, 20,000 NSP cells, or 200,000 NSP cells, were re-inoculated into a NOD/SCID mouse. Tumor formation assessment and pathological examination were performed 6 weeks later.
Statistical Analysis
Data are expressed as the mean ± standard deviation (SD) from at least three independent experiments. Microsoft Office Excel 2007 and SPSS 13.0 were used for data processing. Statistical significance was determined with Student's t test. A P value <0.05 was considered significant.
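Statistical significance by Student's t test, as used here, can be sketched with only the standard library; this assumes the classic equal-variance form of the test, and the efficiency values are illustrative, not the study's raw data:

```python
from math import sqrt
from statistics import mean, stdev

def students_t(a, b):
    # Two-sample Student's t statistic with pooled (equal-variance) SD
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical clone-formation efficiencies (%) from three independent experiments
sp_cells = [56.4, 55.1, 57.7]
nsp_cells = [22.7, 21.2, 24.2]
print(round(students_t(sp_cells, nsp_cells), 2))  # large positive t -> small P value
```

In practice the t statistic would be compared against the t distribution with na + nb − 2 degrees of freedom (as SPSS does) to obtain the P value.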
Existence of SP Cells in RCC Cell Lines
Five RCC cell lines were analyzed for their SP phenotypes. The R2 gate shows that the percentage of SP cells, with dim Hoechst 33342 fluorescence, dropped from 4.82% among 769P cells without verapamil treatment to 0.02% among 769P cells with verapamil pre-incubation (Fig. 1A). Retesting of sorted cells demonstrated a purity of 96.61% for SP cells and 99.89% for NSP cells (Fig. 1B). For the other four RCC cell lines, the ratios of SP cells in 786-O and OS-RC-2 cells were 0.1% and 0.2%, which were too low for the following experiments; no SP cells were detected among SN12C and SKRC39 cells (Fig. S1). Therefore, SP and NSP cells were sorted from 769P cells for subsequent experiments.
Clone Formation and Differentiation of Sorted 769P SP and NSP Cells
After 7 days of culture, most clones contained more than 50 cells. We counted the number of clones and found that the mean clone formation efficiency of SP cells was significantly higher than that of NSP cells [(56.4±1.3)% vs. (22.7±1.5)%, P<0.001; Fig. 2A].
After 10 days of culture, most sorted 769P SP cells differentiated into NSP cells, whereas only a small proportion of SP cells were detected among sorted 769P NSP cells (Fig. 2B), suggesting the ability of SP cells to self-renew and differentiate into NSP cells.
Expression of ABC Family Members in Sorted 769P SP and NSP Cells
The expression of ABCB1, ABCG2, and ABCC1 was detected by RT-PCR and Western blotting. Both experiments showed that ABCB1 was expressed at a high level in sorted SP cells but at a quite low level in sorted NSP cells, whereas ABCC1 and ABCG2 were undetectable in either sorted SP or NSP cells (Fig. 3).
Sensitivity of Sorted 769P SP and NSP Cells to Radiation and Drugs
We measured the sensitivity of sorted 769P SP and NSP cells to radiation by clone formation assay. The clone formation efficiency of SP cells was significantly higher than that of NSP cells either before or after irradiation (P<0.05; Fig. 4A). In detail, the clone formation efficiency of SP cells did not change remarkably after radiation (P>0.05), whereas that of NSP cells decreased dramatically (P<0.05), suggesting that SP cells were more resistant to X-ray damage than NSP cells.
We also measured the sensitivity of sorted 769P SP and NSP cells to MTX, 5-FU, and sunitinib. SP cells showed strong resistance to MTX, whereas NSP cells were sensitive to MTX (P<0.001). Verapamil, an ABC transporter inhibitor, enhanced the inhibitory effect of MTX on SP cells, with a proliferation inhibition rate similar to that of NSP cells under the same conditions (P>0.05); however, verapamil failed to enhance the effect of MTX on NSP cells (P>0.05) (Fig. 4B). We also found that SP cells were much more resistant to 5-FU than NSP cells (P<0.05; Fig. 4C), but their sensitivities to sunitinib were similar (P>0.05; Fig. 4D).
Tumor Formation of Sorted 769P SP and NSP Cells in NOD/SCID Mice
Sorted 769P SP and NSP cells were inoculated into NOD/SCID mice to observe their ability to form tumors. One mouse that was inoculated with 20,000 NSP cells and 1 with 200,000 NSP cells died after inoculation. At 6 weeks after cell inoculation, 2 mice developed tumors with only 200 SP cells, whereas the lowest amount of NSP cells to form tumors was 20,000 cells (Fig. 5A; Table 2). Pathological examination confirmed that the tumors formed with SP and NSP cells showed the same characteristics as typical RCC cells, just like unsorted 769P cells (Fig. 5B).

Figure 4. Sensitivity of sorted 769P SP and NSP cells to radiation and chemotherapeutic drugs. A, the clone formation efficiency of sorted 769P SP cells is significantly higher than that of NSP cells either before or after 5 Gy of X-ray irradiation. The clone formation efficiency of SP cells is significantly higher than that of NSP cells after irradiation (P<0.01); the clone formation efficiency of unirradiated NSP cells is significantly higher than that of irradiated NSP cells (P<0.05). B, newly sorted 769P SP cells are more resistant to mitoxantrone than NSP cells (P<0.01), whereas this resistance is reversed with verapamil pretreatment. C, SP cells are also more resistant to 5-fluorouracil than NSP cells (P<0.05). The resistance could also be reversed with verapamil pretreatment. D, the sensitivity of SP cells to sunitinib is similar to that of NSP cells (P>0.05). doi:10.1371/journal.pone.0068293.g004
To confirm the postulated role of SP cells in tumor formation, a tumor formed with 200 SP cells and a tumor formed with 20,000 NSP cells were each dissociated into a cell suspension and stained with Hoechst 33342. FCM analysis showed that the proportion of SP cells was higher in the SP cell-formed tumor than in the NSP cell-formed tumor (Fig. 6), suggesting the in vivo self-renewal and NSP-differentiation of SP cells.
To further determine the tumor formation ability of second-generation SP cells, we inoculated 5,000 SP cells (from 2 tumors formed with 20,000 and 2,000 SP cells, respectively) or 5,000 NSP cells (from 2 tumors formed with 200,000 and 20,000 NSP cells, respectively) into NOD/SCID mice. All mice developed tumors 6 weeks after inoculation. The second-generation NSP tumors were significantly smaller and lighter than the second-generation SP tumors (P<0.01; Fig. 7A and B). All second-generation tumors were histologically identical to primary xenograft tumors (Fig. 7C). These results indicated that SP cells were capable of generating tumors serially.
Discussion
In recent years, research on CSCs in solid tumors has produced remarkable findings. In this study, we isolated SP cells from human clear cell RCC cell lines to determine the biological properties of this cell population. We found that 4.8% of RCC 769P cells were SP cells, which showed the ability of self-renewal and multi-lineage differentiation. The clone formation efficiency of SP cells was higher than that of NSP cells. In addition, SP cells showed tumor formation ability at least 100 times higher than that of NSP cells. Tumors were formed in NOD/SCID mice inoculated with only 200 freshly sorted SP cells, whereas at least 20,000 NSP cells were required to form tumors in mice. These results provide direct evidence for the high tumorigenicity of SP cells.

Table 2. The tumor formation ability of sorted 769P side population (SP) and non-side population (NSP) cells in NOD/SCID mice.
Self-renewal and multi-lineage differentiation capacities are hallmarks of stem cells. Serial transplantation of cells in animal models, although imperfect, can help to evaluate the stability of SP cells in biological behaviors. Our SP re-sorting analysis after serial transplantation of SP cells in mice showed that second-generation SP cells derived from SP cell xenograft tumors maintained the ability of self-renewal and multi-lineage differentiation. Furthermore, cells from both SP and NSP xenograft tumors maintained the capacity to form tumors in mice, but second-generation SP tumors were significantly heavier than second-generation NSP tumors, suggesting that 769P SP cells are more tumorigenic than NSP cells.
Cancer stem cells are considered able to undergo asymmetrical self-renewing cell division, dividing into one stem cell and one progenitor cell, which can generate a variety of more differentiated functional cells that comprise the whole tumor population [35]. In our study, the purity of sorted 769P SP cells was 96.61% and that of sorted NSP cells was 99.89%. It is possible that a few SP cells may have attached to NSP cells and therefore been collected together. After 10 days of culture, the sorted SP cells developed into a community containing 19.51% SP cells and about 80% NSP cells, whereas the sorted NSP cells developed into a community containing only 2.39% SP cells, which may arise from the few SP cells that slipped through during sorting. NSP cells formed a few smaller colonies and also regenerated tumors, although the tumors were smaller. Taking the findings from the long-term differentiation and serial transplantation experiments together, it is strongly suggested that SP cells undergo asymmetrical division and are capable of differentiating into NSP cells and forming the bulk of the tumor. The capacities of NSP cells to form colonies and tumors, which were much weaker than those of SP cells, may be explained by the small proportion of contaminating SP cells among the sorted NSP cells.
The SP phenotype is determined by the status of the ABC transporters ABCB1, ABCC1-5, and ABCG2 [36]. Thus, SP cells are sorted by measuring the rapid efflux of lipophilic fluorescent dyes by ABC transporters [16]. Most previous studies have focused on the function of ABCG2, while the necessity of the pump function of ABCB1 in stem cell expansion has been under debate [37]. In our study, the expression of ABCB1 was high in SP cells but low in NSP cells as detected by both RT-PCR and Western blotting. Interestingly, neither ABCC1 nor ABCG2 was expressed in SP or NSP cells, suggesting that only ABCB1 contributes to the SP phenotype in 769P cells.
According to the CSC theory, CSCs in solid tumors are resistant to chemotherapy and radiotherapy, and resident CSCs that survive treatment may reform tumors [38]. We conducted chemosensitivity and radiosensitivity assays to compare the sensitivity of SP and NSP cells to conventional therapies. We found that SP cells were more resistant to radiation than NSP cells. In addition, SP cells were more resistant to MTX and 5-FU than NSP cells, indicating that SP cells are widely resistant to conventional chemotherapeutic drugs. However, this drug resistance could be reversed by pretreatment with verapamil, an ABC transporter inhibitor, suggesting that ABC transporters may be responsible for drug resistance [36,37,39].

Figure 7. Tumor formation ability of second-generation SP cells in NOD/SCID mice and pathological examination with H&E staining. A, NOD/SCID mice were inoculated with 5,000 second-generation SP cells from a tumor formed with 2,000 SP cells or with 5,000 second-generation NSP cells from a tumor formed with 200,000 NSP cells. Tumors were removed from the mice 6 weeks after inoculation. B, the second-generation SP tumors are significantly heavier than second-generation NSP tumors (**P<0.01). C, Pathological examination shows that both the second-generation SP tumor and the second-generation NSP tumor are histologically identical to primary xenograft tumors. doi:10.1371/journal.pone.0068293.g007
In conclusion, our study proved that SP cells isolated from the RCC 769P cell line possess stem cell characteristics through both in vitro and in vivo experiments. These SP cells are characterized by strong proliferation potential, self-renewal, differentiation, resistance to chemotherapy and radiation, and in vivo tumor formation ability. ABCB1 has been found to contribute to the function of SP cells. Hopefully, developing a therapy targeting this cell population will help to improve the prognosis of RCC.
BMC Musculoskeletal Disorders BioMed Central Research article Reliability of Ashworth and Modified Ashworth Scales in Children with Spastic Cerebral Palsy
Background: Measurement of spasticity is a difficult and unresolved problem, partly due to its complexity and the fact that there are many factors involved. In the assessment of spasticity in the pediatric disabled population, methods that are easily used in practice are ordinal scales that still lack reliability. A prospective cross-sectional observational study was planned to determine the reliability of the Ashworth Scale (AS) and the Modified Ashworth Scale (MAS) in children with spastic cerebral palsy (CP).
Background
Spasticity is one feature of an upper motor neurone syndrome that may affect functionality, limit activities of daily living and diminish quality of life in children with spastic cerebral palsy (CP) [1][2][3][4][5]. The assessment of spasticity is important in order to determine the effectiveness of treatment on spasticity, to plan medical or surgical interventions, to measure the regulation of tonus, to decide on physiotherapy goals, and to encourage the children and their families.
However, the measurement of spasticity is a difficult and unresolved problem, partly due to its complexity and the fact that there are many factors involved [6]. There are many different assessment methods for spasticity, varying from clinical ordinal scales to complex electrical or orthotic equipment.
Electrophysiologic tests, electromyography, the dynamic flexiometer, spasticity measurement systems, the pendulum test and the isokinetic dynamometer are all fine examples from the published literature, although these methods are of limited clinical use. They are mostly used for research studies, and it is hard to elicit cooperation in children [2,[6][7][8][9][10][11]. In the assessment of spasticity, methods that are easily used in practice measure the resistance of spastic muscles to quantify muscle tone, such as the Ashworth Scale (AS), the Modified Ashworth Scale (MAS), the Tardieu Scale and the Modified Tardieu Scale (MTS). The AS and MAS measure spasticity and are applied manually to determine the resistance of muscle to passive stretching (Table 1). The Tardieu and Modified Tardieu Scales are measured at 3 different velocities (V1, V2, and V3). By moving the limb at different velocities, the response to stretch can be more easily gauged, since the stretch reflex responds differently to velocity [8,9,[12][13][14][15][16][17][18][19]. The AS, MAS, Tardieu and Modified Tardieu Scales are commonly used in children with CP [20,21].
The application of ordinal scales indicates that they still lack reliability and have some limitations in measuring spasticity. The scales offer qualitative and subjective information, concerning validity and reliability [9,22,23].
The AS and MAS need no equipment; they are easily and commonly used in the clinic [2,8,9,[24][25][26]. However, these scales have some disadvantages because they are not standardized, stimulus is not well controlled, and also they have no reliability and validity for all muscle groups. They are not easily used statistically as they include numerical values [2,3,8,9,16,27].
In the study conducted by Bohannon and Smith, the reliability of the AS in elbow flexors in patients with stroke was assessed and found reliable [8]. The reliability of the AS and MAS is better in the upper limb. The reliability in the lower extremities has controversial results, and low reliability in children with spastic CP has been demonstrated in a few published studies [2,16,27]. Clapton et al. investigated the interrater and intrarater reliability of MAS in elbow flexors, hip adductors, quadriceps, hamstrings, gastrocnemius and soleus of 17 children with hypertonus. Elbow flexors and hamstrings had good ICC values of interrater reliability, while poor interrater reliability was observed in the other muscles. Hamstrings had good intrarater reliability while the other muscles had moderate reliability [28]. Fosang et al. investigated the reliability of MAS, passive range of motion (PROM) and MTS in 16 children with CP. All measurements were repeated twice by six raters. The interrater reliability for PROM and MTS provided acceptable intraclass correlation coefficient values, but the results for MAS were lower [19]. In the studies analyzing the reliability of the AS, Sehgal reported that the AS had limited and low reliability, Pandyan et al. found that the interrater reliability of the AS needs to be addressed, and Brashear et al. found "good" inter- and intrarater reliability of the AS in patients with stroke [22,29,30]. Yam and Leung investigated the reliability of MAS and MTS in children with spastic CP. The intraclass correlation coefficients of both scales were low and did not reach the acceptable limit of 0.75. Caution should be used when these scales are applied [31].
AS and MAS are common to clinical practice and are frequently used. As the reliability of both scales are not definite and there are few studies on younger children, we planned to conduct this study. There is no study in the published literature investigating the reliability of AS and MAS together in younger children with CP. The purpose of our study was to assess the intra and interrater reliability of the AS and MAS, and to examine the reliability of both scales in the lower extremities in children with spastic CP.
Procedure
The study received ethical approval from Hacettepe University Ethics Committee and all parents of the children were informed about the study and their consent was obtained. A prospective cross-sectional observational study was conducted on the lower limbs of 38 spastic diplegic children (76 lower limbs in all) whose parents had given consent, and who had the inclusion criteria and were able to complete the study. Eight out of 38 children could not participate in the second assessment session as 3 children displayed anxiety and could not cope with measurement, 5 children were living out of the city and were not able to attend twice. Therefore the intrarater reliability was assessed in 30 children.
The study included 11 girls, 27 boys, a total of 38 children with spastic diplegic CP. The mean age for the children was 52.9 months (SD: 19.6) ranging from 18 to 108 months. The functional level of participants was classified according to the Gross Motor Function Classification System (GMFCS), 20 children with CP were in Level II (52.6%), 18 were in Level III (47.4%) and 9 were in Level I (23.7%) [32]. Level I represents the children who can walk without restrictions but have limitations in more advanced gross motor skills. Level II represents those who can walk without restrictions but have limitations walking outdoors and in the community. Level III represents those who can walk with assistive mobility devices but with limitations in walking outdoors and in the community.
Inclusion criteria for the study were: (i) spastic diplegic type of CP; (ii) no prior orthopedic surgery or Botulinum toxin injection; (iii) no oral or intrathecal myorelaxant drugs; (iv) no severe limitations in passive range of motion at the lower extremities; and (v) no mental retardation. Each child was assessed by three physiotherapists in two different sessions a week apart. The intrarater reliability was determined by a paired comparison of the measurements for each therapist between the two assessments. The interrater reliability was determined by a paired comparison of the measurements of the three therapists on the same day.
The full-time experience of the participating physiotherapists (A, B, C) was 16, 12, and 3 years, with 14, 8, and 3 years in pediatric rehabilitation, respectively. All of the measurements were taken in the supine position, with the head in midline and the resting limb position neutral, except the hip external rotation measurement, which was taken in the sitting position. The scores for AS and MAS were determined according to the level of resistance during passive movement of the antagonist muscles [8,9,23]. The muscle groups tested were hip flexors, adductors (knee extended), internal rotators of the hip, hamstrings and plantar flexors (knee extended) (Table 1).
A pilot study was performed to reach an agreement among the physiotherapists about the scoring of AS and MAS, the positioning of the patient, and also about the speed of movement, the number of repetitions of movement per joint, and the order of testing for the muscles in the lower extremities. One repetition was done per joint. The three physiotherapists agreed on an optimum speed. Assessments were performed by the three physiotherapists (A, B, C) in the same order, in a quiet room when the participants were calm and relaxed. The order of testing for the muscles was as follows: hip flexors, adductors, internal rotators, hamstrings and plantar flexors. The physiotherapists tried to perform the assessments without causing any discomfort. Each physiotherapist was assisted by the same fourth physiotherapist, who did not perform any measurement and only helped maintain the positions of the subjects and recorded the scores. Assessments were performed and measured only once in the same session due to the nature of spasticity, and a 30-minute interval period between the assessments was added in order to eliminate stretch reflexes occurring in the previous measurement and not to affect the following measurements. The interval period between the two assessment sessions was 7 days in order not to keep the initial records in mind. Scores from the right and left sides of the body were combined for the same muscle and data from all raters were collected. Participants were assessed using AS and MAS [8,9].
Statistical analysis
We handled each lower extremity of the child as a separate case; therefore, different results from the right and left legs of a child did not affect each other. The intraclass correlation coefficient (ICC) was used to assess the intra- and interrater reliability of AS and MAS. Fleiss and Cohen suggest that the ICC is the mathematical equivalent of the weighted kappa for ordinal data, but it can also assess reliability for more than two raters at a time and for different numbers of raters for each subject [33]. The ICC can be used for ordinal data with equal distance between intervals [34]. MAS and AS scores were considered ordinal, and a value of 1.5 for MAS was assigned to ratings of 1+ to maintain equal intervals [22]. The 95% confidence interval (CI) was used to determine statistical significance. The clinical significance was defined as poor for an ICC below 0.50, moderate for 0.50 to 0.75, and good for 0.75 or higher [34]. The software used for all calculations was SPSS 11.01 for Windows.

Table 1. Ashworth Scale (AS) and Modified Ashworth Scale (MAS).
Ashworth Scale [17]:
0 - No increase in tone
1 - Slight increase in tone giving a catch when the limb is moved in flexion and extension
2 - More marked increase in tone, but limb is easily flexed
3 - Considerable increase in tone, passive movement difficult
4 - Limb rigid in flexion or extension
Modified Ashworth Scale:
0 - No increase in muscle tone
1 - Slight increase in muscle tone, manifested by a catch and release or by minimal resistance at the end of the range of motion when the affected part(s) is (are) moved in flexion or extension
1+ - Slight increase in muscle tone, manifested by a catch followed by minimal resistance through the remainder of the range of motion, but the affected part(s) is (are) easily moved
2 - More marked increase in muscle tone through most of the range of movement, but the affected part(s) is easily moved
3 - Considerable increase in muscle tone, passive movement difficult
4 - Affected part(s) is (are) rigid in flexion or extension
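Two small details above are easy to get wrong when reproducing such analyses: the 1+ → 1.5 recoding and the ICC significance bands. A minimal sketch (the helper names are ours, not from the study or SPSS):

```python
def mas_numeric(score):
    # Map MAS ratings to equal-interval numbers; '1+' is assigned 1.5 [22]
    return 1.5 if score == "1+" else float(score)

def icc_band(icc):
    # Clinical significance bands: <0.50 poor, 0.50-0.75 moderate, >=0.75 good [34]
    if icc < 0.50:
        return "poor"
    return "moderate" if icc < 0.75 else "good"

print([mas_numeric(s) for s in [0, 1, "1+", 2, 3, 4]])  # [0.0, 1.0, 1.5, 2.0, 3.0, 4.0]
print(icc_band(0.31), icc_band(0.64), icc_band(0.82))   # poor moderate good
```

Recoding 1+ to 1.5 keeps the intervals between adjacent MAS categories equal, which is the condition under which the ICC can be applied to this ordinal scale.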
Results
The mean, minimum, and maximum values of the AS and MAS scores are presented in Table 2. The interrater reliability scores of AS and MAS, which ranged from moderate to good, are presented in Table 3.
Intrarater Reliability of AS
Among three raters, the AS intrarater ICC scores were found to be ranging from poor to good (ICC: 0.31-0.82). The lowest reliability was 0.31 between the adductor measurements of rater C and the highest reliability was 0.82 between the hamstring measurements of rater C. All scores of raters are demonstrated in Table 4.
Intrarater Reliability of MAS
The scores were poor to good (ICC: 0.36-0.83). The lowest reliability was 0.36 for the hip internal rotator measurements of rater A and the highest reliability was 0.83 for the hip flexor measurements of rater C. The intrarater ICC scores of MAS are demonstrated in Table 5.
Discussion
In the assessment of spasticity in children with spastic CP, a number of ordinal scales such as AS, MAS and Tardieu and MTS are commonly used [20,31,35]. There is no study in the published literature investigating the reliability of AS and MAS together in younger children with CP, therefore we undertook this study. To our knowledge, this is the first study investigating the intra and interrater reliability of AS and MAS in children with spastic CP. AS and MAS measure resistance to passive movement and therefore measure hypertonia [36].
In this study, the reliability in hip flexors, adductors, internal rotators, hamstrings and gastrocnemius muscle groups in children with spastic CP was investigated. The interrater reliability scores of both AS and MAS ranged from moderate to good, and the intrarater reliability scores ranged considerably, from poor to good.
Various factors may affect the measurement results of reliability. While investigating the reliability of scales, related joints, anatomic and biomechanical characteristics of muscle groups as well as interrater and intrarater change and biological change should be taken into consideration [37]. Priebe et al determined that low reliability results of ordinal scales are related to problems which occur during the measurement of spasticity as well as the environment and general condition of the patient [17].
In order to eliminate these negative factors in our study, an appropriate environment regulation, the comfort of the children, the relaxation of the children, and interval periods between measurements were provided. Besides, due to its nature, spasticity is sensitive to passive stretching and velocity may affect clinical features. As passive stretching is considered to affect the following measurement results, measurements were repeated once on two different days of the study. To minimize the disadvantage of the stretching of the spastic muscle, fast stretching was avoided. The measurement criteria were standardized by a pilot study previously. The physiotherapists performed measurements in the same order and gave breaks between the measurement of the testers in order to avoid the effect of stretching.
In our study, the ICC scores of the interrater reliability of AS ranged from 0.54 to 0.80 and the intrarater reliability from 0.31 to 0.82, with the gastrocnemius muscle having the lowest value; the interrater reliability of MAS was between 0.64 and 0.87, while the intrarater reliability was between 0.41 and 0.83. This pattern may be related to features shared by AS and MAS. We were not surprised to see that the interrater reliability was higher than the intrarater reliability. This confirms that these scales should be interpreted with great caution and indicates that even the same rater has the possibility of making an error. The repetition of measurements by the same physiotherapist, and experience, may not affect reliability, as we mention in the conclusion of our study.
Although the interrater reliability of AS and MAS were similar in our study, the intrarater reliability of MAS had higher scores than that of AS. This result may arise from the common use of MAS in practice by raters who had experience in pediatric physiotherapy. Fosang stated that MAS had better intrarater reliability compared to interrater reliability and that it should only be used by a single rater for the same participant rather than different raters [19]. In contrast to the results of the studies conducted by Fosang and Clapton, the interrater reliability of MAS in our study was higher than the intrarater reliability. This may be due to the low number of raters in our study [19,28].
The interrater ICC of MAS was 0.87 for the adductor muscles and 0.68 for the plantar flexors. Yam and Leung investigated the interrater reliability of MAS and MTS for hip adductors and plantar flexors in children with spastic CP. Their results showed that the intraclass correlation coefficients of both scales were low and did not reach the acceptable limit of 0.75. We had similar results in the plantar flexors, although they were different in the adductors.
Our result for the adductor muscles may be related to the laws of physics. Power and load arm of these muscle groups are longer compared to those of the plantar flexors. In addition, the range of motion of the adductor muscle groups is greater than that of the plantar flexors. These may provide a higher reliability of the adductor muscles.
There are few studies examining the reliability of AS and MAS in one single study, however recent studies have focused on the reliability of MAS [19,28].
There have been studies focusing on the reliability of AS and MAS in the adult population. Haas et al. used AS and MAS for assessing lower extremity spasticity in 33 adult paraplegic patients and found AS to be more reliable than MAS [38]. Ansari et al. assessed wrist spasticity with AS and MAS in patients with stroke and reported no difference in interrater reliability between AS and MAS [39]. The reliability of the scales is also affected by the muscles assessed and the personal characteristics of the subjects. Probable factors include ease of manipulation, the need to support the lower extremities in children, and the range of motion of the muscle group being assessed [8,22,28]. Also, our sample group consisted of children who were younger than those of most sample groups with CP; this could have affected our results. Younger children may be easier to move due to their smaller limbs (especially for the proximal muscles) but harder to test due to poorer adherence since they are so young.
Conclusion
Nevertheless, recent studies on this issue may guide future studies. The interrater and intrarater reliability of AS and MAS are related to muscle and joint characteristics. The repetition of measurements by the same physiotherapist, and experience, may not affect reliability. These scales are not very reliable, and assessments of spasticity using these scales should therefore be interpreted with great caution. Future research studies are required to analyze the factors affecting reliability in children with CP.
Evolution of our understanding of methylmercury as a health threat.
Methylmercury (MeHg) is recognized as one of the most hazardous environmental pollutants, primarily due to endemic disasters that have occurred repeatedly. A review of the earlier literature on the Minamata outbreak shows how large-scale poisoning occurred and why it could not be prevented. With the repeated occurrences of MeHg poisoning, it gradually became clear that the fetus is much more susceptible to the toxicity of this compound than the adult. Thus, recent epidemiologic studies in several fish-eating populations have focused on the effects of in utero exposure to MeHg. Also, there have been many studies on neurobehavioral effects of in utero exposure to methylmercury in rodents and nonhuman primates. The results of these studies revealed that the effects encompass a wide range of behavioral categories without clear identification of the functional categories distinctively susceptible to MeHg. The overall neurotoxicity of MeHg in humans, nonhuman primates, and rodents appears to have similarities. However, several gaps exist between the human and animal studies. By using the large body of neurotoxicologic data obtained in human populations and filling in such gaps, we can use MeHg as a model agent for developing a specific battery of tests of animal behavior to predict human risks resulting from in utero exposure to other chemicals with unknown neurotoxicity. Approaches developing such a battery are also discussed.
Introduction
Methylmercury is recognized as one of the most hazardous environmental pollutants, largely due to endemic disasters such as Minamata disease in Japan and methylmercury poisoning in Iraq, as well as industrial accidents involving methylmercury compounds. A failure to acknowledge and prevent the disease, however, has caused the disasters to be unnecessarily repeated. For example, the second outbreak of Minamata disease (or Niigata Minamata disease) took place in Niigata, Japan, after the outbreak in Minamata. In Iraq, methylmercury poisonings repeatedly occurred from the distribution of wheat seeds dressed with methylmercury.
This paper is divided into three parts. The first part, an extract from the earlier literature, describes how the toxicity of methylmercury or organic mercury was found and recognized. Because methylmercury brought about tragedies many times in different ways, it is important to trace how the tragedies occurred and why they were not prevented. Thus, the early history of poisonings by industrial organic mercury compounds is reviewed and the early stages of the outbreak of Minamata disease are described.
In the second part, experimental studies are reviewed with an emphasis on investigations of behavioral teratology. Methylmercury has been considered an environmental health threat to the nervous systems of developing fetuses since the exploratory work of Spyker et al. in 1972 (1).
In the final part, we present a comparison of human and animal data with regard to behavioral effects of in utero exposure to methylmercury.
Identification of Methylmercury Toxicity
Methylmercury Recognized as an Industrial Toxicant
Organic mercury compounds, including methylmercury, have been commercially produced since 1930. The use of organic mercury compounds in chemical research, however, goes back to 1863. Organic mercury was first identified as a health hazard in 1866 when two laboratory technicians were poisoned with dimethylmercury (2). A 30-year-old male who had been exposed to dimethylmercury for 3 months "complained of numbness of the hands, deafness, poor vision and sore gums... [He was] unable to stand without support," although no motor palsy was detected. His condition rapidly worsened; he became restless and comatose within a week and died 2 weeks after the onset of symptoms. Another victim was a 23-year-old laboratory technician who had been working in the laboratory for 12 months, although he had handled dimethylmercury for only 2 weeks. He complained of sore gums, salivation, numbness of the feet, hands and tongue, deafness and dimness of vision. He answered questions only very slowly and with indistinct speech... Three weeks later he had difficulty in swallowing and was unable to speak... [He] was often restless and violent. He remained in a confused state and died of pneumonia 12 months after the onset of symptoms (2). Most of the signs and symptoms described above resemble those observed in (acute) Minamata disease. Sore gums and salivation were, however, symptoms observed in mercury vapor poisoning. Since dimethylmercury can easily be broken down to produce metallic mercury, it is considered that these symptoms were due to co-existing metallic mercury.
The therapeutic use of diethylmercury against syphilis was tried in Germany in 1887 but was readily abandoned because of extremely high toxicity. Animal experiments showed involvement of the nervous system. "Incoordination was noticed, especially in rabbits, and motor paralysis was observed in dogs and cats. Tremors, blindness, loss of the sense of smell, deafness, and attacks of wrath on the slightest provocation were observed in many of the dogs" (2). The toxicity of alkylmercury compounds had therefore already been recognized in the 19th century.
Environmental Health Perspectives, Vol 104, Supplement 2, April 1996
Although there had been accidental cases of mercury poisoning and related findings in experimental studies as mentioned above, organic mercury compounds such as aryl and alkyl derivatives continued to be used for seed dressing. "In 1940, Hunter reported four cases of methylmercury poisoning in a factory where fungicidal dusts were manufactured without an enclosed apparatus... [The symptoms were] severe generalized ataxia, dysarthria and constriction of the visual field" (2). The characteristic symptoms of mercury vapor poisoning, with the exception of tremors, were not observed. One of the victims suffered from symptoms (mainly ataxia) for 15 years after exposure had ceased.
"At [patient] necropsy, generalized ataxia was referable to cerebellar cortical atrophy, selectively involving the granule-cell layer of the neocerebellum. The concentric constriction of the visual fields was correlated with bilateral cortical atrophy around the calcarine fissures" (2). This was originally reported in 1954 and later methylmercury poisoning was referred to as Hunter-Russell syndrome. The emergence of a methylmercury poisoning epidemic, Minamata disease, coincided with these years.
Thereafter, cases of organic mercury poisoning were reported in the United Kingdom, the United States, Canada, and Sweden. Since organic mercury compounds were used mainly for seed dressing, most victims were workers in chemical manufacturing plants and farmers and members of their families who accidentally ingested dressed seeds.
From these cases of accidental human exposure, the health hazard of methylmercury and other organic mercury derivatives had been well recognized by the 1950s. Despite this awareness, however, Minamata disease occurred in the same decade, and later methylmercury poisoning surfaced in Iraq.
Minamata Disease
Minamata disease was defined as methylmercury poisoning that occurred among the people living along Minamata Bay in Kyushu, Japan (3). The way in which the victims became exposed to methylmercury was uncommon; they consumed substantial amounts of contaminated fish and shellfish. The source of methylmercury was effluent from a chemical company where mercury was used as a catalyst to produce acetaldehyde. Although methylmercury concentration in the seawater was not high, it was concentrated as it ascended the food chain and thus was in the fish and shellfish that were the staple diet of the villagers. The concentrations of methylmercury in the fish were high enough to cause methylmercury poisoning. Minamata disease is evidently unique in its origin as it involved the bay's ecosystem.
Minamata disease was first officially reported on 1 May 1956 to the public health authority of Minamata, Kumamoto prefecture (4). During the preceding 10 days, Dr. Hosokawa, the head of the hospital that was affiliated with Chisso (the responsible company), and his colleague experienced two infantile cases of an unknown disease that resulted in death. Since the two infants were sisters and so severe a disease occurred in one family at the same time, the doctors felt that the situation required serious attention and reported it to the public health authority. Moreover, before these two infantile cases, they had dealt with sporadic occurrences of a similar disease (5).
Abnormal gait, dysarthria, ataxia, deafness, and constriction of the visual field were the main symptoms (6). It was also common to find emotional lability in the form of euphoria or depression. Serious cases displayed states of mental confusion, drowsiness, and stupor. Sometimes, however, the victims were restless and prone to shouting, which often led into coma.
After Dr. Hosokawa's official report, a committee to study this serious disease consisting of representatives from Minamata City, the affiliated hospital, the municipal hospital, and the Minamata Medical Association was formed. This was called the Kibyou Taisaku Committee, which literally translates as the antimysterious disease committee. The committee found 30 cases within several months. The epidemic's first case was reported in December 1953. Then 10 and 11 more patients were confirmed whose onsets began in 1954 and 1955, respectively ( Figure 1). The prognosis was poor, and more than 30% of the cases were fatal.
In August 1956, Kumamoto University School of Medicine was asked to join the committee and the Study Group of Kumamoto University was organized. The initial epidemiologic study revealed an entire range of characteristics related to "the mysterious disease" (7). The disease occurred regardless of age. Although family clustering was observed, there was no proof of infectious transfer. Most of the population used wells, but wells were not involved. Ninety percent of the victims' households were related to fishery, whereas less than 30% were in the control group formed from the households in the neighborhood. More than two thirds of the households consumed fish caught in the bay every day in substantial amounts (sometimes several hundred grams and even up to 1 kg per person in one meal); among the control group, only 6% ate local fish daily and in lesser amounts. The death rate of domestic cats in the victims' households was also higher than that of the controls; during the period of 1953 to 1956, 50 cats out of 61 died in the victims' houses whereas only 24 out of 60 died in the control houses. These epidemiologic findings clearly indicated that substantial fish consumption was the cause of the mysterious disease. These findings also demonstrated that a toxic agent, not a biologic one, in fish was responsible. The study group suggested a ban on the catching and selling of fish from the bay, although the local authorities were opposed to this policy.
Because the disease occurred by a unique route of exposure, various extraordinary phenomena and ecological changes preceded its outbreak; floating dead fish and empty shellfish had been observed a few years before. In one area of the bay, observations of floating fish date back to 1949 (4). Crows were also affected. Cats that were housed in the villagers' homes showed symptoms similar to those manifested in human victims; the cats showed ataxic gait, slowness, and unsteady movement. Sometimes they dashed around in a circle and ran hysterically, the latter causing some of them to jump into the sea and drown. There was no evidence that these events had been seriously recognized by the local authorities. Witnesses were later collected in the epidemiologic studies conducted by the study group of Kumamoto University School of Medicine.
Early pathologic examinations (6) of victims suggested that the disease was encephalopathia toxica (toxic encephalopathy). Similar pathologic changes were found in affected cats, birds, and even fish.
It is noteworthy that characteristics of the disease were revealed and the study group concluded by the end of 1956 that an unidentified toxic agent in fish was responsible for the disease.
A long period remained, however, before the causal agents could be specified. First, manganese was suspected in November 1956, followed by selenium in April 1957 and thallium in 1958; however, no link between these agents and the disease could be found in feeding experiments. Moreover, clinical and pathologic findings did not support any of these substances as the causal agent (6).
Finally in 1957, organic mercury was first suspected. In pathologic examinations of four cases, lesions in the granular cell layers of the cerebellum were noted (6). Professor Takeuchi, at the Department of Pathology in Kumamoto University consulted pathologic textbooks and found that carbon monoxide and mercury poisonings caused such lesions. A volume that followed the pathologic texts was published in 1958, and a chapter of this volume introduced the study by Hunter and Russell (8). The pathological changes described in the book resembled the findings in the cases of Minamata disease. Since the study group was unable to chemically analyze mercury at that time, Professor Takeuchi and his colleagues tried and succeeded in identifying mercury histologically.
Clinical observations of constriction of the visual field and ataxia also indicated organic mercury poisoning (9). Professor Tokuomi of the Department of Internal Medicine came across a clinical toxicology book (10) in April 1957 that classified symptoms and listed possible agents. Under the item "ataxia," alkylmercury was named with other agents such as atropine, barbiturates, etc. Alkylmercury was also listed as an agent under "restriction of visual fields." At this point, he almost determined the toxic agent responsible, but the conventional wisdom of chemistry did not support this idea; because mercury, especially alkylmercury, was expensive, it did not seem logical that such materials would be discharged. Moreover, it was not understood how alkylmercury could have been synthesized. Thus, Professor Tokuomi abandoned the idea that alkylmercury was a possible agent.
Later, when Professor Takeuchi almost concluded that alkylmercury was the causal agent, Professor Tokuomi once again strongly suspected alkylmercury and decided to reexamine the patients. In addition to making clinical observations, he noted an increase in urinary excretion of mercury, while the administration of British antilewisite (BAL) further revealed increased excretion of mercury in urine (9).
After the establishment of a chemical analysis for mercury, environmental investigations directed by Professor Kitamura of the Department of Public Health also showed elevated mercury concentrations in sediments near the factory waste-water outlet (7). Moreover, organ samples from the victims and affected cats contained high concentrations of mercury.
These results were reported by the Study Group of Kumamoto University in July 1959. Their conclusion was that "Minamata disease occurred by eating contaminated fish and shellfish and organic mercury is most suspected as the causal agent" (6). Though their findings seemed conclusive, arguments about the causal agent continued for several years, partly because the indicated causal agent linked legal responsibility to the company. In addition, the methylmercury synthesis mechanism was not clarified until 1964.
A careful review of the literature resulted in the discovery of a paper published in 1930 that described a type of mercury poisoning different from typical (metallic) mercury vapor poisoning (11). The different type of mercury poisoning was observed among workers in acetaldehyde plants who handled mercury-containing sludge. They did not have stomatitis, which is commonly observed in mercury vapor poisoning. The author had even suspected that mercury in its organic form could have been the cause of the poisoning. Therefore, this different type of mercury poisoning had already been described before the observation of Hunter (12). Moreover, the formation of organic mercury as a by-product in the production of acetaldehyde using mercury was also suspected. Somehow, this literature was not found by the study group.
It took a long time to reach the conclusion that the organic mercury ingested by fish and shellfish was the cause of Minamata disease. However, it was rapidly concluded from the epidemiologic study that the disease was caused by an unidentified toxic agent and that fish and shellfish were involved. Here lie the strength and limits of epidemiology: it is not difficult to recognize a risk factor but it is difficult to specify the causal agent. Considering the state of analytical chemistry at that time, it was more difficult to identify the toxic agent than it is now. It is regrettable that the local authorities did not prohibit fishing in the bay in the early stages of the epidemic of Minamata disease. The conclusion to be drawn after reviewing the events that occurred at the onset of Minamata disease is that epidemiology is able to provide enough evidence to prevent the spread of an unknown disease, even though the specific agent involved has not been determined.
Fetal Minamata Disease
Fetal Minamata disease was first detected in 1958 by Professor Kitamura and his colleagues in the Minamata Bay area (13). They found nine infants who manifested a severe disease resembling cerebral palsy during their epidemiologic investigation. The incidence of the cerebral-palsy-like disease was extremely high among infants who were born in and after 1955. Of 188 births in the area during 1955 to 1958, 13 cases were found; the incidence rate was thus 6.9%. Later, three more cases surfaced involving mental retardation and minimal neurologic symptoms. By 1974, 40 cases were confirmed as fetal Minamata disease.
Examination of these children revealed the following signs and symptoms in high incidence: mental retardation, cerebellar ataxia, primitive reflex, and dysarthria in all children (17/17), seizure in 82%, and pyramidal signs in 75%. Sensory disturbance, constriction of the visual fields, and hearing impairment could not be examined because of the serious conditions of the patients.
It was a tradition in Japan to preserve a part of the umbilical cord that remained on a baby after birth which later fell off. Methylmercury concentrations in the cords of the victims were high, and exposure to mercury was thus confirmed.
The mothers of these children had seemed healthy at the time their children were confirmed to have fetal Minamata disease. However, 11 mothers out of 15 showed slight symptoms of Minamata disease in 1962. Later the mothers developed further symptoms, and in 1974, 57% of these mothers experienced constriction of the visual field, one of the typical symptoms of Minamata disease.
From the experience in Minamata, birth control (actually abortion) was advised by the local government to women of childbearing age who lived in the polluted area and who had hair mercury concentrations of 50 ppm or higher. Only one case of fetal Minamata disease was confirmed in Niigata by 1974 (4).
Methylmercury Poisoning in Iraq
Since organic mercury compounds have been used as seed dressings, poisonings caused by eating dressed seeds (mainly wheat) have occurred repeatedly (14). In Iraq, three epidemic poisonings were reported: one in 1955 to 1956, another in 1959 to 1960, and the third and largest outbreak in 1971 to 1972 (15). These outbreaks were caused by the distribution of seed grain treated with alkylmercury compounds. Rural people consumed the grain to make homemade bread. The total number of official victims was 6530, including 459 deaths. Symptoms were paresthesia or malaise followed by ataxia, visual field constriction, and hearing impairment.
In the investigation of the tragedy, dose-effect and dose-response relationships were established. Since there was possibly a background incidence, a hockey-stick model, which is composed of a horizontal line and a sloped line, fitted well. In addition, a relationship between mercury concentrations in the hair and blood was also established. Since the mercury concentration in hair strands recapitulates the history of methylmercury exposure, analysis of hair mercury provided abundant information about the course of exposure.
Fetal Exposure to Methylmercury in Iraq
In the Iraqi outbreak (15)(16)(17), babies with in utero exposure to methylmercury were investigated for physical and mental development. The mothers were interviewed as well. Exposure was estimated by the peak mercury concentration in a single hair strand from each mother.
A scoring system of examination results was adopted in the investigation. Although individual scores exhibited variability, a dose-response relationship was found. Statistical analysis suggested greater effects in boys than in girls.
The data were statistically analyzed in detail to establish a dose-response relationship between the effect and the hair mercury concentration (18). Both logit and hockey-stick models were fitted to the data. From these analyses, the estimated lowest effect level (ELEL) was proposed as a threshold for human populations.
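The hockey-stick model referred to above is simply a piecewise-linear curve: a flat background response up to a threshold dose, then a sloped line. As an illustrative sketch only (the data below are synthetic, not the Iraqi measurements, and `fit_hockey_stick` is a hypothetical helper rather than code from the original analyses), such a model can be fitted by grid-searching the threshold and solving ordinary least squares at each candidate:

```python
import random

def fit_hockey_stick(dose, response, n_grid=200):
    """Least-squares fit of a hockey-stick dose-response curve:
        response = b0                    for dose <= t  (background)
        response = b0 + b1 * (dose - t)  for dose >  t  (sloped line)
    The threshold t is found by grid search; b0 and b1 by ordinary
    least squares on the predictor x = max(dose - t, 0)."""
    lo, hi = min(dose), max(dose)
    n = len(dose)
    best = None
    for i in range(n_grid):
        t = lo + (hi - lo) * i / (n_grid - 1)
        x = [max(d - t, 0.0) for d in dose]
        sx, sy = sum(x), sum(response)
        sxx = sum(v * v for v in x)
        sxy = sum(v * r for v, r in zip(x, response))
        denom = n * sxx - sx * sx
        if denom == 0:  # threshold beyond all doses; slope undefined
            continue
        b1 = (n * sxy - sx * sy) / denom
        b0 = (sy - b1 * sx) / n
        sse = sum((r - b0 - b1 * v) ** 2 for v, r in zip(x, response))
        if best is None or sse < best[0]:
            best = (sse, t, b0, b1)
    return best[1], best[2], best[3]

# Synthetic illustration: a background response of 2 below a
# threshold near a dose of 10, with a slope of 0.3 above it.
random.seed(0)
dose = [random.uniform(0, 100) for _ in range(300)]
resp = [2.0 + 0.3 * max(d - 10.0, 0.0) + random.gauss(0, 0.5) for d in dose]
t, b0, b1 = fit_hockey_stick(dose, resp)
```

Fitted this way, the estimated threshold `t` plays the same role as the estimated lowest effect level discussed above: the dose below which the model predicts only the background response.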
Recent Epidemiological Studies
Since fish-eating populations are exposed to the threat of methylmercury, effects of in utero exposure to methylmercury have been studied (Table 1). In New Zealand, a group with high fish consumption (more than 3 times per week during pregnancy) was identified and the risk of in utero methylmercury exposure was evaluated (23). When the children were 4 years of age, they were tested with the Denver Developmental Screening Test. Children born to mothers with hair mercury levels higher than 6 ppm had a higher prevalence of abnormal results. More comprehensive examinations were given at 6 years of age. At this age, children with greater methylmercury exposure performed worse than children with less exposure, but the variance explained by methylmercury exposure was small. Currently, possible neurobehavioral outcomes of prenatal methylmercury exposure are being evaluated in large-scale prospective studies of human fish-eating populations. In the Seychelles (22), children up to 5 years old are being studied in terms of the development of cognitive functions and more specific effects. Test items were selected based on the preceding reports on behavioral consequences of prenatal low-level methylmercury exposure in humans as well as in nonhuman primates. In the Faroe Islands (21), a cohort of 7-year-old children is being studied. Test items were chosen so as to cover a wide variety of behaviors but, at the same time, maximize the specificity of the evaluated functions. Results of these studies are yet to come but are expected to reveal possible neurobehavioral consequences of perinatal methylmercury exposure.
[Table 1 footnotes: DDST, Denver Developmental Screening Test; WISC-R, Wechsler Intelligence Scale for Children-Revised; (a) brain area whose function is associated with the performance of the behavioral task; (b) data from the species provided the rationale for including the behavioral task.]
Neurobehavioral Profile of Prenatal Methylmercury Toxicity in Experimental Animals
Past experiences have shown that fetuses are much more vulnerable to methylmercury exposure than adults. In the Minamata disease epidemic and in the methylmercury poisoning in Iraq, infants were affected by in utero exposure. Methylmercury readily crosses the placental barrier and is transported to the developing nervous system. Embryos and fetuses have been considered much more susceptible to methylmercury than adults. In Minamata, mothers with minimal symptoms, such as numbness of the extremities and perioral region, gave birth to severely affected infants. Moreover, pathologic changes observed in the patients of fetal Minamata disease were much more destructive, presumably because the architecture of the nervous system in the fetuses was under development during the in utero exposure to methylmercury. In the Iraqi tragedies, perinatal exposure cases were observed. How in utero exposure to methylmercury affects postpartum life is of interest and importance in terms of susceptibility. In this section, therefore, the focus is on the neurobehavioral consequences of in utero exposure to methylmercury. Because most of the regulatory agencies required behavioral tests in rodents, rodent and primate studies are described separately.
Studies in Rodents
Since the pioneering study of Spyker et al. (1), reflex and motor development have been examined in many rodent studies, including that of Bousch (28).
[Tables of exposure regimens and outcomes; abbreviations: gi, gastric intubation; ip, intraperitoneal injection; iv, intravenous injection; sc, subcutaneous injection; diet, food containing methylmercury compounds; GD, day(s) of gestation.]
In mice, no change or slight retardation was observed. The latter was partly counteracted by co-administration of selenium. Retardation of the development of swimming ability was an important result shown by Spyker et al. (1) and was observed in most of the studies.
Cognitive Functions (Tables 4-6). Most investigations showed impairment in mice and in rats in learning a maze or water escape.
Sensory Functions ( Table 7). Most of the behavioral studies in this functional domain have been done on visual functions of primates and a few have been done on rodents. Elsner (40) trained rats to press a lever with predetermined ranges of force and time. The impaired performance of methylmercury-exposed rats was considered to be a result of deficit in tactile-kinesthetic systems.
Motivation and Arousal Behavior (Tables 8-10). In mice, spontaneous activities were decreased; the results were inconsistent in rats. Selenium supplementation partly counteracted the hypoactive effects of methylmercury (25). In the open-field tests, two investigations employing an identical strain of mice showed comparable results: longer latency, decreased urination, and increased backing. In rats, however, no change was observed, although increased locomotion was found when challenged with amphetamine. Increased susceptibility was observed in two studies, although the inducing methods were different. Social Functions (Table 11). While three studies found slight or no effects on ultrasonic vocalization, Elsner et al. (48), using highly sophisticated devices, observed significant differences between the treated and control animals. Rats exposed to methylmercury were found to be more aggressive than vehicle controls in dyadic encounters (51).
Studies in Rodents: Methylmercury as a Model Agent
In the last decade, several attempts have been made to evaluate various behavioral tests as end points of prenatal neurotoxic insults (29,30,52,53). Because methylmercury was known as a typical behavioral teratogen, it was included in these attempts as a model agent. Because various aspects of behavioral functions were examined in each of these studies, results of these studies will be compared in terms of the toxicity profile of methylmercury.
One of these studies (52) concluded that evaluation of swimming ontogeny and Biel maze learning should be included because of their sensitivity to methylmercury exposure. Collaborative studies were also done by Elsner and his colleagues (30,53). In the first trial (53), female rats were given methylmercury in drinking water at concentrations of 0, 1.5, or 5 mg/l from 2 weeks before pairing until weaning. Among the various test items examined, a discrete trial spatial alternation task was shown to be the most sensitive, both in terms of the effective dose 50% (ED50) and the no toxic effect level (NTEL). In the second trial (30), a wider range of doses was employed to include lower exposure levels. Thus, rat dams were administered 0.025, 0.05, 0.5, or 5.0 mg/kg/day of methylmercury during gestational days 6 to 9. Among the behavioral tests, the discrete trial spatial alternation task was found, as it was in the first trial, to be the most sensitive, with effects detectable in the 0.05 mg/kg/day group. It should be noted that differences in performance in a visual discrimination task, another rather demanding operant task, could only be detected at a dose of 5.0 mg/kg/day, the largest dose employed.
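Of the two sensitivity metrics just mentioned, the ED50 has a simple operational meaning: it is the dose at which a fitted dose-response curve reaches half its maximal effect. A minimal sketch with a logistic (logit) curve and hypothetical parameter values (not figures from the cited studies):

```python
import math

def logistic_response(dose, ed50, slope):
    """Logistic (logit) dose-response curve on a log-dose scale,
    returning the response fraction between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log(dose) - math.log(ed50))))

# By construction, the response at dose == ed50 is exactly 0.5,
# which is what the ED50 sensitivity metric expresses.
ed50, slope = 3.0, 2.0  # hypothetical parameter values
half = logistic_response(ed50, ed50, slope)
```

Comparing ED50 values across behavioral tests, as in the trials described above, then amounts to asking which test's fitted curve reaches its half-maximal effect at the lowest dose.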
Studies in Nonhuman Primates
Motor Functions. Contrary to the rodent studies, little work has been done in this category in primates (Table 12).
Cognitive Functions. Gunderson et al. (57) reported that exposed infant monkeys paid less visual attention to novel stimuli. The result was interpreted as a deficit in visual recognition memory. Object permanence development in infants (55) and delayed alternation in adult monkeys (56), both assumed to be tests of spatial memory, were examined with one cohort of monkeys. Of these two test paradigms, only the object permanence development was impaired by methylmercury. Thus monkey studies so far did not show any persistent cognitive deficits caused by in utero methylmercury exposure.
Sensory Functions. Taking advantage of the similarity between the visual systems of monkeys and humans, Rice and Gilbert (59) examined visual effects of prenatal and postnatal exposure to methylmercury. Spatial vision was affected in both studies, but temporal vision was impaired only by exposures that had started before birth.
Social Functions. By observation and coding of elements of behavior in monkeys, Burbacher et al. (60) found less frequent social behaviors among the exposed groups.
What Do These Results Tell as a Whole?
The experimental studies have shown that some of the test items detected some behavioral alterations caused by prenatal exposure to methylmercury, at least when high doses (but not high enough to cause severe maternal toxicity or fetotoxicity) were given to the animals. In this sense, a proper combination of these tests would have been successful in detecting some effects of prenatal methylmercury exposure, although a given single item might not have produced a positive result.
Some behavioral tests were shown to be particularly sensitive to prenatal methylmercury exposure. Among others, the spatial alternation test (30), the tactile-kinesthetic test (40), and the DRH task (39) showed deviations from control at very low dose levels. At these doses, other simple tests, such as those included in a functional observation battery, would fail to show any changes. It is unknown, however, whether such differential sensitivities among the tests reflected the nature of the behavioral tasks per se or the nature of the effect of methylmercury. Evaluation of these tasks against other agents may show that the latter was the case. On the other hand, more mechanistic analyses of these behaviors might reveal the inherent sensitivities of these tests, which would support the former explanation. It should also be noted that the reproducibility of these test results must be demonstrated; e.g., Elsner (40) could not reproduce the deviation in the spatial alternation task (30) obtained in their first trial, and no laboratory has published rodent behavioral studies showing effects from the same low level of exposure as was demonstrated in the DRH studies (39).
Thus, it is not clear from these tables, which cover a broad spectrum of behavioral functions, whether there are any functional categories particularly vulnerable to prenatal methylmercury exposure. It may be that these results simply indicate that the behavioral consequences of prenatal methylmercury exposure are widespread among various behavioral functions. Also, the fact that most of the tests (except for some tests with primates) were apical rather than specific to one functional category might obscure any profile that might be present. For example, the effects observed in some learning evaluations, such as the Biel maze (52) or the DRH operant task (39), might result from motor incapacity rather than from learning. Likewise, the change in audiogenic startle habituation might result from either an ototoxic effect or a learning deficit (61,62). These facts may not be problematic if one does not intend to characterize the neurotoxicologic profile but only intends to detect some adverse effects of methylmercury. It is clear, however, that for such a characterization, one must seek another set of test items that is more specific to each functional category.
Comparison of Human and Animal Data on Neurobehavioral Effects of Prenatal Methylmercury Exposure
Gaps between Animals and Humans
Burbacher et al. (63) thoroughly reviewed the literature dealing with neuropathologic and/or neurobehavioral effects of prenatal methylmercury exposure in humans, nonhuman primates, and rodents. They concluded that the neurotoxicity of methylmercury in terms of behavioral and pathologic effects had remarkable similarities among humans, (nonhuman) primates, and small mammals at high levels of exposure (i.e., brain mercury levels of 12-20 ppm) and that at moderate or low levels of exposure, neurobehavioral effects were regarded as similar when functional categories (e.g., motor, sensory, cognitive, etc.) rather than specific end points were compared. To be exact, they observed that at least two of three species shared such effects as "early reflex behaviors, motor coordination, visual functioning, and complex performance" (63) as a result of prenatal methylmercury exposure. It should be noted that these shared responses covered a broad range of behavioral categories.
Despite the conclusion reached by Burbacher et al. (63), there seem to be some gaps between human and animal studies dealing with neurobehavioral consequences of prenatal methylmercury exposures. The first gap is the level of specificity. As discussed above, epidemiologic studies of fish-eating human populations have looked, and are continuing to look, into behavioral end points in a more specific way than experimental studies in animals have. For example, the New Zealand study (19) suggested some functional domains such as fine-motor functions and language skills were vulnerable to methylmercury, although no clear-cut profile had been delineated. The Seychelles study (22) has adopted test items that are more or less connected with specific functional domains. The Faroe Islands study (21) is the most explicit in this regard because it systematically chose behavioral end points that are related to focal brain pathology. In comparison, most of the rodent studies usually adopted rather apical tests. On the other hand, what the behavioral changes demonstrated in rodents imply regarding human behavior might not always be clear-cut. For example, what kind of behavioral deficit in humans does the impaired DRH performance in rats predict? How well does an effect on spatial memory in rodents predict the effect on spatial memory in humans? These are issues that need to be answered to establish a test battery that aims at predicting the neurotoxicologic profile of an agent, which, in turn, can be extrapolated to human behavior.
The second gap relates to the periods of testing. In the above-mentioned human studies, developmental profiles of infants and children, including such higher functions as cognition or learning, were examined. Such emphasis on behavioral evaluation in the developmental period is partly because any follow-up studies extended into adulthood will be extremely difficult to conduct in natural human populations where socioeconomic factors exert significant impacts on behavioral development and where relocations of the subjects are not uncommon. On the other hand, in rodent experiments behaviors were usually examined in the adults; this is especially true for examining complex behaviors including schedule-controlled operant behaviors. Thus, developmental profiles of such complex behaviors have been rarely obtained except for the quantitative analyses of the development of ultrasonic vocalization (48) or of auditory startle habituation (29). In monkeys as described above, effects of in utero exposure on spatial memory were apparent in infants but not in adults (55,56), suggesting a reversible nature of the effect on this function. It should be noted, however, that the test techniques employed in the two studies were not identical or even similar, and, as the authors have acknowledged (56), they might evaluate different functions.
The third gap refers to the difference in the types and periods of exposures. In most of the rodent studies, methylmercury was administered several times between 5 and 15 days of gestation. Since the concern regarding human populations is related to exposure derived from fish consumption, lower level exposures with longer durations (including both preconception as well as neonatal periods) should be evaluated, although neonatal treatment might confound the results by affecting the dams' behavior. Also, it may be important to examine differential susceptibility to methylmercury among different stages of both the gestational as well as neonatal periods.
Development of Specific Test Batteries
Methylmercury has acquired a unique status among hazardous chemicals in our environment in that a) existence of developmental neurotoxicity is apparent in both humans and animals, b) a relatively large body of neurologic and behavioral data is available in both humans and animals when compared to most other chemicals, and c) the ongoing large-scale epidemiologic studies are evaluating behavioral functions in more specific ways than routine neurologic or psychologic batteries. Thus, by taking into account current and expected future findings, as well as the human-animal gaps described above, methylmercury may now serve as a model agent for developing a more specific test battery of animal behavior that could be used to predict possible human hazards resulting from prenatal exposure to other chemicals.
To develop such a battery, it is essential to have a choice of behavioral domains or categories and a choice of specific behavioral items for each domain. For the domains or categories, the choices adopted by Rees et al. (49), the National Center for Toxicological Research (64), or the Faroe Islands study (21) are useful as guidelines. In the remaining part of this paper, we will focus on the second step of the procedure.
To choose specific behavioral items for a given domain, there are two possible approaches. The first is an approach in which a behavior of an animal that is functionally or operationally analogous to the human behavior of concern is chosen as the test item. For example, the results of the discrete trial spatial alternation task used by Eisner et al. (30) have been related to the attention disorder (and minimal brain dysfunction) seen in human children. Testing of the tactile-kinesthetic system of rats (40) was suggested by human studies that showed a relationship between attention deficit disorder and poor development of the tactile-kinesthetic system in children. If such a functional analogy could be validated with appropriate experiments, this type of approach could provide a means to directly predict human behavior on the basis of rodent behavior. A problem with this approach, however, is that an apparent similarity between behaviors exhibited by different species does not always guarantee the same underlying neurologic mechanisms. Thus, extensive validation is required in this regard.
As an alternative approach, one can examine a particular behavior with a known neurological mechanism that is related to human behavior of concern. Stanton and Spear (61) argue for this approach, suggesting that such neural comparability became known not only for sensory functions but also for a number of behavioral functions that might be examined in a psychologic evaluation. If so, this can be a powerful approach, especially when a prediction of the neurotoxicologic characteristics of a given agent in a human population is needed. Although it seems that lesion studies (65) or pharmacologic studies may provide valuable information in this regard, few attempts at developing test batteries for neurobehavioral toxicity seem to have fully used such information.
Recently, a behavioral test battery that may be used for evaluating several aspects of central nervous system function in primates has been developed (64). The battery includes several test items (Table 13), some of which were chosen by the functional analogy of certain behaviors between humans and monkeys, and others because of a suspected correlation between the task and the integrity of certain brain areas (66). Thus, the choice of the test items used both of the above approaches. The battery was tested against acute effects of several reference compounds to demonstrate differential sensitivity of each task to different compounds (64). Although much work has to be done to validate each item and demonstrate the sensitivity of this battery, especially for chronic effects, this approach seems worthy of extensive pursuit. It is desirable to have a similar specific battery applicable to rodents, because it is much easier to run parallel experiments of neurochemical and neuropharmacologic examinations with rodents than it is with primates. Although some human behavioral functions exist that cannot be assessed in rodents, e.g., color vision or language skills, rodents can accomplish highly complex behaviors such as 24-arm radial mazes (67) or repeated acquisition of correct sequences (68). These complex behaviors may be used for examining specific functional domains, such as those evaluated in the monkey battery. Finally, it should be pointed out that there are some basic items that have been dropped in most of the behavioral studies.

Table 13. Behavioral functions and items examined in the National Center for Toxicological Research Operant Test Battery. The brain region thought to be related to each task, where applicable, is given in parentheses.

Behavioral function: Behavioral item
Motivation (to respond for reinforcement): Progressive ratio task
Learning: Incremental repeated acquisition task (hippocampus)
Position and color discrimination: Conditioned position responding task (prefrontal cortex)
Time estimation: Temporal response differentiation task
Short-term memory and attention: Delayed matching-to-sample task (hippocampus)
The first one is the determination of the internal dose. Lack of an appropriate measure of the internal dose, e.g., brain Hg concentration, makes the significance of certain behavioral findings (regardless of whether they are positive or negative) somewhat ambiguous. In the case of in utero exposure, the dose should be determined not only at the time of testing but also during the prenatal period (69). The second is potential influences of the subjects' genetic background. Although the influence of genetic background on kinetics (excretion and distribution) of methylmercury has been evaluated, influences on behavioral effects seem to have scarcely been examined. In Iraq, individual differences were recognized in terms of the neurologic susceptibility of infants to prenatal methylmercury exposure as previously described. Individual differences were also a focus of consideration in choosing test items in the Faroe Islands study (21). Genetic background must be one of the determinants of such individual differences, and thus, requires further consideration. In general, systematic study of genetic influences on behavior is best conducted with rodents. In this respect again, a specific battery with rodents, if properly developed, would be of great value.
Sporadic Lymphangioleiomyomatosis Disease: A Case Report
Pulmonary Lymphangioleiomyomatosis (LAM) is a rare disease of the lung and lymphatic system that primarily affects women of childbearing age. LAM is a progressive disease with a poor prognosis; it worsens over time and is extremely difficult to treat. In this study, we discuss the case of a 31-year-old woman with LAM who was initially misdiagnosed with leiomyoma and describe the path that led to the correct diagnosis and effective treatment. Following a precise diagnosis based on comprehensive clinical data and specific immunohistochemical tests, sirolimus treatment was initiated, and the patient responded completely to the treatment. This case report demonstrates that LAM is an uncommon condition that is challenging to diagnose, which delays its treatment.
• Pulmonary Lymphangioleiomyomatosis (LAM) is a rare disease of the lung and lymphatic system that primarily affects women of childbearing age. The disease is distinguished by extensive nodular infiltration of the lungs, which is accompanied by the proliferation of smooth muscle-like cells. Shortness of breath and pneumothorax are the most typical clinical manifestations of this disease.
• Lymphangioleiomyomatosis is difficult to diagnose and should be considered in the differential diagnosis, especially in women of childbearing age. A comprehensive clinical history is required for an accurate pathology diagnosis. Moreover, the histopathological and immunohistochemical investigation revealed that HMB-45, desmin, and ER were positive, while CD31, CD34, and CD10 were negative.
Introduction
Pulmonary Lymphangioleiomyomatosis (LAM) is a rare pulmonary disease that primarily affects women of childbearing age. Since LAM affects less than one in a million people, it rarely receives an early diagnosis. Although it has been considered a disease with a poor prognosis, recent therapies such as sirolimus are considered promising, especially if started early. When making early clinical impressions and referring to a pathologist, certain radiologic aspects can greatly assist the clinician. If the pathologist is provided with rich clinical and paraclinical data and a plausible diagnosis, they will sign out truthfully. 1 The disease is distinguished by extensive nodular infiltration of the lungs, which is accompanied by the proliferation of smooth muscle-like cells (SMLC). 1 Pneumothorax and shortness of breath are the most frequent clinical symptoms of this condition. Chylothorax and bloody sputum are also possible. LAM is a progressive disease that worsens over time and has a poor prognosis. Mutations in the tuberous sclerosis complex-1 (TSC-1) and tuberous sclerosis complex-2 (TSC-2) genes contribute to LAM development. The disease is caused by the proliferation of SMLCs over time, particularly in the lungs, lymphatic system, and pleurae. This abnormal growth causes the lung to develop holes or cysts. Many of the symptoms of LAM are similar to those of other lung diseases, making diagnosis challenging. These clinical manifestations are caused by the SMLCs' aberrant growth, which causes damage to the lung parenchyma, constriction of the airways, and lymphatic obstruction.
An abdominal computed tomography (CT) scan may reveal angiomyolipoma-related kidney lesions. Although a mixed obstructive and restrictive pattern may be detected, pulmonary function tests clearly demonstrate a progressive obstructive ventilatory deficit. 2 Clinically, sporadic (S-LAM) and tuberous sclerosis complex-associated (TSC-LAM) are the two types of LAM.
TSC-LAM is an autosomal dominant neoplastic disease that affects adult female patients more frequently. 3 This is a case study of a well-known but uncommon condition. This study aimed to describe a rare condition to help in its early diagnosis and to distinguish it from similar diseases so that appropriate treatments can be initiated as soon as possible.
Case Presentation
The study was approved by the Ethics Committee of Shiraz University of Medical Sciences, Shiraz, Iran (code: IR.SUMS.REC.1401.234). Written informed consent was obtained from the patient for publication of this case report and any accompanying images.
On 21 January 2021, a 31-year-old woman was admitted to the General Hospital, Bushehr, Iran. She had been experiencing shortness of breath for several weeks and had no history of smoking. Additionally, there was no family history of any pulmonary disease. During the initial examination, ascites and a small amount of pleural effusion were discovered. Thoracentesis was performed, and some milky-colored fluid was sent for microscopic examination, analysis, and culture. The laboratory results indicated an exudative fluid with a negative culture and cytologic evaluation, in favor of a nonmalignant and noninfectious condition. Some milky ascitic fluid was also extracted, and its analysis likewise produced no conclusive diagnosis. Abdominopelvic ultrasonography revealed a solid mass containing cystic areas in the pelvis, adjacent to the left ovary. There was also a substantial amount of ascites. Pleural effusion was seen on chest imaging in both pleural compartments. The mass found in the pelvis during the initial investigations underwent surgery and was totally excised. The surgical specimen was sent to pathologists without sufficient clinical information. Following histopathologic examination and an immunohistochemistry (IHC) study, it was diagnosed as leiomyoma, which was negative for CD31, CD34, and CD10, low for Ki67, and positive for smooth muscle actin (SMA), desmin, and estrogen receptor (ER). However, this impression could not explain the entire scenario of the disease. As a result of the request for a review of the embedded mass tissue in light of new clinical data, the pathologic report was revised to LAM based on histopathologic morphology and a relevant IHC study that was positive for HMB-45, desmin, and ER, low for Ki67, and negative for CD31, CD34, and CD10.
Although the patient was discharged, her symptoms persisted and were managed with repeated pleural and abdominal taps.Finally, a pleural catheter was inserted, and milky fluid was continuously drained from it.
Eventually, she was visited by an internist-pulmonologist, who evaluated a high-resolution CT scan (HRCT) of her chest, which revealed diffuse bilateral thin-wall cystic lesions, tiny nodules, some parenchymal interstitial edema, and bilateral pleural effusion (figures 1-3). Spirometry was performed, and the results indicated an obstructive pattern, with forced expiratory volume in 1 sec/forced vital capacity (FEV1/FVC) = 59% before and 71% after bronchodilator treatment, indicating the presence of a reversible obstructive component. Repeated thoracocentesis revealed a triglyceride (TG) level of 240 mg/dL, which is high and within the range of chylothorax. This implied that the ascitic fluid was of the same type and was a chylous ascites. A consultation with a dermatologist was also done; the dermatologist noted diffuse acne over the face and upper trunk but no indication of any additional lesions, such as angiofibromas, shagreen patches, or hypopigmented macules.
The combination of the interstitial cystic lung disease seen on the chest HRCT, the obstructive pattern on spirometry, chylothorax, and chylous ascites suggested LAM as the most likely diagnosis. Thus, we asked the pathologists to review the specimens, and they confirmed LAM through histopathology and an IHC study. Histologic examination showed a spindle smooth muscle-like proliferation similar to leiomyoma, with slit-like vascular gaps and some lymphoid aggregates, which had been overlooked during the initial pathological examination. In addition to positive SMA and desmin markers, the IHC study revealed positive human melanoma black 45 (HMB-45), which was diagnostic for lymphangioleiomyoma (figures 4 and 5).
After starting sirolimus, the patient's clinical condition improved significantly. After about six months, both the dyspnea and the ascites had disappeared. A subsequent HRCT demonstrated that the pleural effusion and interstitial edema had faded; however, the cysts persisted, necessitating further HRCTs to monitor them in the upcoming months.
Discussion
The present case was diagnosed in a 31-year-old woman. The disease is distinguished by the aberrant proliferation of smooth muscle-like cells, particularly in the lung parenchyma and airway walls, as well as the lymph nodes. 4 Similar to other obstructive lung diseases, the condition progresses to constriction and obstruction of the airways and, eventually, to alveolar destruction and cystic disease in the lungs and lymphatic system. 5 According to the dermatologist, the skin lesions seen in this patient were acne rather than the angiofibromas seen in tuberous sclerosis. Moreover, no further skin lesions characteristic of tuberous sclerosis were observed, and there was no history of CNS involvement or related clinical signs and symptoms. Therefore, the type of LAM in this case was sporadic (S-LAM), with no associated tuberous sclerosis.
Primary symptoms of the disease include activity-induced dyspnea (shortness of breath) and spontaneous pneumothorax (lung collapse), in 49% and 46% of patients, respectively. 3 These clinical manifestations are caused by lung parenchymal damage, constriction of the airways, and lymphatic obstruction resulting from the abnormal proliferation of smooth muscle-like cells. 6 Obstruction of arteries, lymphatics, and airways leads to the accumulation of chylous pleural fluid, hemoptysis, airflow obstruction, and pneumothorax. 7 In many aspects, LAM cells act like metastatic tumor cells. These cells seem to have migrated to the lungs from an extrapulmonary source. The inactivation of the tumor suppressor TSC-2 is associated with abnormal proliferation, differentiation, migration, and invasion of LAM cells in the lungs. The cellular and molecular mechanisms of neoplastic transformation and destruction of the lung parenchyma by LAM cells still remain unknown. 8 Typically, diagnosis is delayed for five to six years. This condition is often misdiagnosed as asthma or chronic obstructive pulmonary disease. 9 In this study, the disease was not diagnosed until about six months after the initial visits.
LAM may also be associated with fluid-filled hypodense masses in the retroperitoneal portions of the abdomen and pelvis, which are seen in approximately 30% of LAM patients. Typically, no intervention is required. A biopsy or resection may result in a long-term leak. 10 Radiographic findings vary depending on the severity and development of the disease. CT scans also show diffuse thin-wall cysts in the lung parenchyma. A ground-glass appearance due to hemosiderin deposition might also be noticed. 11 In this case, a preliminary CT scan of the lungs revealed diffuse and bilateral thin-wall gas-filled cystic lesions. The intervening lung parenchyma appeared hazy, with conspicuous main fissures and interlobular septa, indicating interstitial pulmonary edema.
In most cases, clinical and radiological findings are insufficient to make a conclusive diagnosis. Thus, a biopsy is required. Video-assisted pulmonary thoracoscopy is the most definitive procedure, and the diagnosis is confirmed with distinctive immunohistochemical staining that is specific for smooth muscle cells, such as actin, desmin, or HMB-45. Additionally, the cytology of chylous ascites, abdominal nodules, or lymph nodes can also be used to make a diagnosis. 12 In this case, the histopathologic examination and IHC study confirmed the clinical impression.
In addition, it seemed that the initial clinicians failed to diagnose the disease because it was so rare. They lacked sufficient knowledge about it and disregarded the pulmonary and radiologic findings, which were the golden key to the diagnosis of this rare disease; this oversight could have been avoided if a radiologist had been consulted earlier. The pathologists were also unable to diagnose the true nature of the disease due to its rarity and the lack of clinical data. Indeed, it is suggested that clinicians and para-clinicians, including pathologists and radiologists, establish close communication and exchange data to reach the right diagnosis, especially in complicated and rare conditions such as ours. This is the takeaway lesson of this study: it is not necessary or possible to know everything about all diseases, especially rare ones; however, teamwork and data exchange can get us there. Fortunately, the patient was eventually visited by a pulmonologist who was knowledgeable about this rare disease and proposed his clinical impression, which was confirmed by the pathologist after reviewing the previous specimen.
Given that both LAM and metastasizing leiomyoma affect young women, it is crucial to distinguish between the two conditions. Moreover, there are other similarities between metastasizing leiomyoma and LAM, such as smooth muscle proliferation, pulmonary location, and hormonal reliance, as indicated by progression during the child-bearing years. 13 Pitts and colleagues described the radiological, microscopic, and special-study distinctions between metastasizing leiomyoma (ML) and LAM, which are summarized in the following. The radiologic appearance of ML includes well-circumscribed solitary or multiple pulmonary nodules, with diameters of a few millimeters to several centimeters, seen bilaterally throughout the normal interstitium. Microscopic features are similar to those of primary pulmonary leiomyoma, with glandular structures resembling entrapped epithelium. Special studies include estrogen RC+, progesterone RC+, and HMB-45+. 13 The radiologic appearance in LAM includes an interstitial pattern (which could be reticular, reticulonodular, miliary, or honeycomb), hyperinflation, and thin-walled cysts. Pneumothorax, pleural effusion, or chylothorax might be present. A high-resolution CT scan reveals several thin-walled cysts throughout the normal lung, with no zonal preference. Microscopic features include disorganized, spindle-shaped, atypical smooth muscle-like cells that proliferate in lymphatics, alveoli, respiratory bronchioles, tiny arteries, and cystic spaces, as well as centrilobular emphysema. Special studies include desmin+, vimentin+, actin+, and fibroblast antibodies, and certain cells are HMB-45+, estrogen RC+, and progesterone RC+. 13 Some studies reported that somatic mutations in the TSC-2 gene occurred in angiomyolipomas and pulmonary LAM cells of women with sporadic LAM, which strongly supports the direct involvement of TSC-2 in the pathogenesis of this disease. Unfortunately, no mutation analysis was conducted in the present study.
Conclusion
LAM is a rare disease that affects women of childbearing age. The disease is challenging to diagnose and is usually diagnosed late. In women of childbearing age, LAM should be considered in the differential diagnosis of cystic pulmonary disease and spontaneous pneumothorax. Radiologic findings of this rare disease are pathognomonic enough to help with its early diagnosis and treatment.
Figure 1 :
Figure 1: In the left figure, axial HRCT (high-resolution CT scan) at the upper lung level reveals multiple thin-wall gas-filled cystic lesions in both lungs. The intervening lung parenchyma appears hazy, and the main fissures and interlobular septa are slightly apparent, suggestive of interstitial pulmonary edema (left more than right side, L>R). Bilateral pleural effusion is seen (proved to be chylous). The right image shows follow-up a few months after treatment with sirolimus. Although the pulmonary edema and pleural effusion are being resolved, the cystic lesions persist.
Figure 2 :
Figure 2: This figure shows axial HRCT (high-resolution CT scan) at the sub-carinal level in the acute phase and at follow-up, on the left and right, respectively. The findings are the same as those shown in figure 1.
Figure 3 :
Figure 3: The left figure shows HRCT (high-resolution CT scan) at the lung base level. The same findings are seen here. The presence of pulmonary edema is evident. Patchy consolidation on the right is suggestive of atelectasis. The right figure shows the follow-up, several months after treatment. No pleural effusion, pulmonary edema, or atelectasis is present. A tiny loculated pneumothorax on the right implies previous complications and adhesions.
Src Tyrosine Kinases, Gα Subunits, and H-Ras Share a Common Membrane-anchored Scaffolding Protein, Caveolin
Caveolae are plasma membrane specializations present in most cell types. Caveolin, a 22-kDa integral membrane protein, is a principal structural and regulatory component of caveolae membranes. Previous studies have demonstrated that caveolin co-purifies with lipid-modified signaling molecules, including Gα subunits, H-Ras, c-Src, and other related Src family tyrosine kinases. In addition, it has been shown that caveolin interacts directly with Gα subunits and H-Ras, preferentially recognizing the inactive conformation of these molecules. However, it is not known whether caveolin interacts directly or indirectly with Src family tyrosine kinases. Here, we examine the structural and functional interaction of caveolin with Src family tyrosine kinases. Caveolin was recombinantly expressed as a glutathione S-transferase fusion. Using an established in vitro binding assay, we find that caveolin interacts with wild-type Src (c-Src) but does not form a stable complex with mutationally activated Src (v-Src). Thus, it appears that caveolin prefers the inactive conformation of Src. Deletion mutagenesis indicates that the Src-interacting domain of caveolin is located within residues 82-101, a cytosolic membrane-proximal region of caveolin. A caveolin peptide derived from this region (residues 82-101) functionally suppressed the auto-activation of purified recombinant c-Src tyrosine kinase and Fyn, a related Src family tyrosine kinase. We further analyzed the effect of caveolin on c-Src activity in vivo by transiently co-expressing full-length caveolin and c-Src tyrosine kinase in 293T cells. Co-expression with caveolin dramatically suppressed the tyrosine kinase activity of c-Src as measured via an immune complex kinase assay. Thus, it appears that caveolin structurally and functionally interacts with wild-type c-Src via caveolin residues 82-101. Besides interacting with Src family kinases, this cytosolic caveolin domain (residues 82-101) has the following unique features.
First, it is required to form multivalent homo-oligomers of caveolin. Second, it interacts with G-protein α-subunits and down-regulates their GTPase activity. Third, it binds to wild-type H-Ras. Fourth, it is membrane-proximal, suggesting that it may be involved in other potential protein-protein interactions. Thus, we have termed this 20-amino acid stretch of caveolin residues the caveolin scaffolding domain.
Caveolae are small bulb-shaped invaginations located at or near the cell surface (1). They represent a micro-domain of the plasma membrane (1,2). Although caveolae are present in most cells, they are most abundant in terminally differentiated cell types: endothelia, adipocytes, muscle cells (skeletal, cardiac, and smooth), and type I pneumocytes (reviewed in Refs. 3 and 4). For example, in adipocytes they may occupy up to 20% of the total plasma membrane surface area (5). In striking contrast, caveolin and caveolae are reduced or absent in fibroblasts transformed by certain activated oncogenes (such as v-Abl or H-Ras (G12V)) (6).
In this regard, several independent lines of evidence suggest that caveolin may function as a scaffolding protein within caveolae membranes. (i) Both the N-terminal and C-terminal domains of caveolin face the cytoplasm, allowing them to freely interact with cytosolic molecules (22-24). In accordance with this membrane topology, caveolin remains inaccessible to biotinylation probes that have been used to efficiently label proteins that face the extracellular environment (25). (ii) Caveolin undergoes two stages of oligomerization. First, caveolin monomers assemble into discrete multivalent oligomers containing approximately 14-16 monomers per oligomer (25,26). Subsequently, these individual caveolin homo-oligomers (4-6-nm particles) can interact with each other to form caveolae-like structures in vitro (25-50-nm clusters) (25). (iii) A cytosolic membrane-proximal domain of caveolin (residues 82-101) interacts directly with Gα subunits and H-Ras (19,27). This caveolin region preferentially recognizes the inactive conformation of these molecules, because mutationally activated Gα subunits (Gs; Q227L) and H-Ras (G12V) fail to interact with caveolin (19,27). (iv) Interaction of caveolin with purified heterotrimeric G-proteins functionally suppresses their GTPase activity, holding the G-protein in the inactive conformation (27). Thus, caveolin may organize the formation of caveolae microdomains and orchestrate caveolae-related signaling events.
Here, we examine the structural and functional interaction of caveolin with Src family tyrosine kinases. We find that caveolin interacts directly with wild-type Src (c-Src) but fails to stably interact with a mutationally activated form of Src (v-Src). The Src-interacting domain of caveolin was localized to caveolin residues 82-101, and a peptide encoding this sequence dose-dependently suppressed the auto-activation of purified Src kinases (c-Src and Fyn). Furthermore, transient co-expression of c-Src with the full-length caveolin in 293T cells inhibited the kinase activity of c-Src.
Thus, it appears that three distinct classes of lipid-modified signaling molecules (Gα subunits (27), H-Ras (19), and c-Src (this report)) that normally co-purify with caveolin all recognize the same cytosolic membrane-proximal region of caveolin (residues 82-101). In addition, caveolin prefers the inactive conformation of these molecules, and caveolin binding can hold these molecules in the inactive conformation. As such, caveolin may function as a common membrane-anchored scaffolding protein for these and other cytoplasmic signaling molecules.
Our current observations are analogous to another family of scaffolding proteins (28), the AKAPs (A-kinase anchor proteins), which preferentially recognize the inactive conformation of protein kinase A, protein phosphatase 2B (calcineurin), and protein kinase C (α- and β-isoforms) (29,30). As we find with caveolin, each of these enzymes is inhibited when bound to AKAP-79 (29,30).
Cell Culture-Insect Sf21 cells were provided by Dr. Takashi Okamoto (Massachusetts General Hospital/Harvard Medical School). Sf21 cells were grown in Ex-cell 400 medium, 10% fetal bovine serum, and 1% penicillin-streptomycin at 27°C. For double transfection experiments, 293T cells (gift of Dr. Kunxin Luo, Whitehead Institute, and Dr. Anthony J. Koleske, MIT) were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum, glutamine, and antibiotics, as described.
Construction of Recombinant Baculovirus-expressed c-Src and v-Src-Baculovirus constructs encoding either c-Src or v-Src (from chicken) were the generous gifts of Drs. Zhou Songyang and Lewis Cantley (Beth Israel Hospital/Harvard Medical School) and were constructed as described previously (33). Briefly, the cDNA encoding pp60c-src was subcloned into the transfer plasmid vector pAC373 (34) using the BamHI site. A mixture of 2 µg of recombinant plasmid pAC373(c-Src) DNA and 1 µg of purified wild-type baculoviral DNA was transfected into insect Sf9 cells. Four days later, culture supernatants were removed and centrifuged at 1,000 rpm for 10 min. Clarified supernatants containing wild-type and recombinant viruses were plaque-assayed on a monolayer of Sf9 cells. Occlusion-negative plaques were picked and seeded onto 2.5 × 10^6 cells. After 3 days of incubation, cells and culture supernatants were removed and centrifuged at 1,000 rpm for 10 min. The cell pellets were analyzed by both in vitro kinase assay and immunoblotting using an anti-Src antibody. Plaques testing positive for the presence of kinase-active pp60c-src were selected for three rounds of plaque purification (35). The selected plaques were used as virus stock for producing c-Src protein by infecting insect Sf21 cells. Similarly, v-Src was constructed as a recombinant baculovirus (33).
Interaction of Recombinant Baculovirus-expressed Src with GST-Caveolin Fusion Proteins-The interaction of GST-caveolin fusion proteins with baculovirus-expressed Src was evaluated essentially as we described for the interaction of caveolin with baculovirus-expressed heterotrimeric G-protein α subunits (27) and H-Ras (19). Briefly, GST or GST-caveolin fusion proteins bound on glutathione-agarose beads were extensively washed first with PBS (1×) and then with lysis buffer containing protease inhibitors (3×). These beads contained ~100 pmol of a given fusion protein/100 µl of packed volume. Approximately 100 µl of this material was incubated with 1 ml of precleared Src lysates by rotating overnight at 4°C. After binding, the beads were extensively washed (6-8×) with wash buffer containing 50 mM Hepes, pH 7.5, 120 mM NaCl, 1 mM EDTA, 0.5% CHAPS, and protease inhibitors. Finally, associated proteins were eluted with 100 µl of elution buffer containing 50 mM Tris, pH 8.0, 1 mM EDTA, 1% Triton X-100, 10 mM reduced glutathione, and protease inhibitors. The eluate was mixed 1:1 with 2× sample buffer and subjected to SDS-PAGE (10% acrylamide). After transfer to nitrocellulose, Western blot analysis was performed with an anti-Src mAb probe (1:1000 dilution; Oncogene Sciences). Horseradish peroxidase-conjugated secondary antibodies (1:5000 dilution; Amersham Corp.) were used to visualize bound primary antibodies by enhanced chemiluminescence (ECL; Amersham Corp.).
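As a rough check on scale, the amount of fusion protein quoted above (~100 pmol per 100 µl of packed beads) can be converted to mass. The sketch below is illustrative only; the ~47-kDa molecular mass assumed for the GST-FL-caveolin fusion (GST ~26 kDa plus full-length caveolin ~21 kDa) is our assumption, not a figure from the text.

```python
# Hedged sketch: mass corresponding to ~100 pmol of a GST-caveolin fusion
# protein. The molecular mass below is an assumption (GST ~26 kDa plus
# full-length caveolin ~21 kDa ~= 47 kDa), not a value from the paper.

GST_CAV_KDA = 47.0  # assumed mass of the GST-FL-caveolin fusion, in kDa

def pmol_to_ug(pmol: float, kda: float) -> float:
    """Convert picomoles of protein to micrograms given its mass in kDa.

    1 kDa = 1000 g/mol = 1 µg/nmol = 1e-3 µg/pmol.
    """
    return pmol * kda * 1e-3

# ~100 pmol of the fusion protein per 100 µl of packed beads:
print(round(pmol_to_ug(100, GST_CAV_KDA), 2))  # -> 4.7 (µg)
```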
Caveolin-derived Synthetic Peptides-Caveolin peptides were designed based on the protein sequence of the N-terminal domain of canine caveolin. Peptides were synthesized using standard methodology and subjected to amino acid analysis and mass spectroscopy (Massachusetts Institute of Technology Biopolymers Laboratory) to confirm their composition. The following four peptides were utilized: peptide 1 (NRDPKHLNDDVVKIDFEDVIAEPEGTHSF, caveolin residues 53-81); peptide 2 (DGIWKASFTTFTVTKYWFYR, caveolin residues 82-101); peptide 3 (IWKASFTTF, caveolin residues 84-92); and peptide 4 (TVTKYWFYR, caveolin residues 93-101). Note that peptides 3 and 4 correspond to the N-terminal and C-terminal halves of peptide 2, respectively.
In Vitro Auto-phosphorylation of Src Family Tyrosine Kinases-Two to six units of either purified recombinant c-Src or Fyn (44-132 ng/ml; Upstate Biotechnology, Inc.) were incubated with caveolin peptides at concentrations of 0, 0.3, 1, 3, and 10 µM in the kinase reaction buffer. Reactions were performed in a total volume of 50 µl of kinase reaction buffer (20 mM Hepes, pH 7.4, 5 mM MgCl2, 1 mM MnCl2). The reaction was initiated by addition of 15 µCi of [γ-32P]ATP. After incubation for 15 min at 25°C, the reaction was stopped by addition of 2× SDS-PAGE sample buffer and boiling for 2 min. Phosphorylated proteins were visualized by autoradiography using an intensifying screen. Control samples omitting either [γ-32P]ATP or Src family tyrosine kinases showed no activity. The activity of purified c-Src and Fyn was defined by the manufacturer (UBI, Inc.) as follows: in kinase assays that use only [γ-32P]ATP, one enzyme unit transfers 0.1 nmol of 32P/min to a peptide substrate, i.e. cdc2 (residues 6-20). Also, purified c-Src had a specific activity of 900,000 units/mg.
Co-expression of c-Src and Caveolin in Mammalian 293T Cells-Untagged caveolin and c-Src were co-expressed in 293T cells by co-transfection using a modified calcium phosphate precipitation procedure as described previously (19). Briefly, 293T cells were plated in 10-cm culture dishes at ~1 × 10^6 cells/dish in Dulbecco's modified Eagle's medium containing 10% fetal calf serum (complete medium) and cultured until cells reached ~80% confluency. Just prior to transfection, the medium was removed and replaced with 4 ml of fresh complete medium. For each 10-cm dish, the transfection mixture was prepared by adding 6 µg each of the c-Src and caveolin plasmid DNAs (10 µg of DNA for single transfections) to sterile H2O such that the final volume was 438 µl. After addition of 62 µl of 2 M CaCl2 to the DNA/H2O solution, 500 µl of 2× Hepes-buffered saline (pH 7.05) was added dropwise to the mixture with gentle agitation. Within 1-2 min, this mixture was added to the cells; dishes were gently agitated to ensure uniform mixing. 10 h post-transfection, the culture medium was replaced with fresh complete medium. 48 h post-transfection, 293T cells were harvested and used for immune complex kinase assays. For transient expression, caveolin was subcloned into the MCS (HindIII-BamHI) of the vector pCB7 containing the hygromycin-resistance marker (the vector was a gift from Dr. James E. Casanova, Massachusetts General Hospital) (24). A construct encoding the chicken c-Src cDNA for mammalian cell expression (pMHHB5) was a gift from Dr. David Shalloway (Cornell University).
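The transfection-mixture arithmetic can be checked directly: 438 µl of DNA/H2O plus 62 µl of 2 M CaCl2 plus 500 µl of 2× HBS gives a 1-ml mix containing 1× HBS and ~124 mM CaCl2, the usual range for calcium phosphate precipitation. A minimal sketch:

```python
# Hedged sketch of the calcium phosphate mix arithmetic from the protocol
# above (per 10-cm dish). Checks that the stated volumes give a 1-ml mix,
# 1x final HBS, and ~124 mM final CaCl2.

DNA_WATER_UL = 438   # DNA brought to 438 µl with sterile H2O
CACL2_UL = 62        # 62 µl of 2 M CaCl2 added
HBS_UL = 500         # 500 µl of 2x Hepes-buffered saline added dropwise

total_ul = DNA_WATER_UL + CACL2_UL + HBS_UL
final_cacl2_mM = 2000 * CACL2_UL / total_ul  # 2 M stock = 2000 mM
final_hbs_x = 2 * HBS_UL / total_ul          # 2x stock diluted to final

print(total_ul, round(final_cacl2_mM), final_hbs_x)  # -> 1000 124 1.0
```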
Immunoprecipitation and Western Blotting of Caveolin and Src-48 h post-transfection, one or two 10-cm dishes of 293T cells (~1-2 × 10^6 cells/dish) co-transfected with c-Src and caveolin were washed three times with PBS (1×), collected, and incubated in lysis buffer (10 mM Tris, pH 7.5, 50 mM NaCl, 1% Triton X-100, 60 mM octyl glucoside) for 30 min at 4°C. Lysates were clarified by centrifugation at 15,000 × g for 15 min and precleared by incubation with 0.1% albumin-coated protein A-Sepharose for 6 h at 4°C. After preclearing, supernatants were transferred to tubes containing mouse monoclonal anti-Src IgG prebound to protein A-Sepharose. After incubation rotating overnight at 4°C, immunoprecipitates were washed three times with lysis buffer and divided into three equal aliquots. One aliquot was subjected to an immune complex kinase assay (see below). The two other aliquots were subjected to immunoblot analysis with either a c-Src mAb probe or a caveolin mAb probe (clone 2297).
Immune Complex Kinase Assays-Anti-Src immunoprecipitates from 293T cells co-transfected with c-Src and caveolin were washed twice with kinase reaction buffer (20 mM Hepes, pH 7.4, 5 mM MgCl2, 1 mM MnCl2). The kinase reaction was initiated by the addition of 2 µCi of [γ-32P]ATP. Kinase reactions were performed in a total volume of 50 µl at 25°C. After incubation for 10 min at 25°C, immunoprecipitates were washed twice with lysis buffer at 4°C and eluted by addition of 2× SDS-PAGE sample buffer and boiling for 2 min. Proteins were separated by SDS-PAGE and visualized by autoradiography with an intensifying screen.
Scanning Densitometry-Quantitation was performed as detailed previously (36). Briefly, phosphorylated protein bands were digitized by high resolution optical scanning; volumetric integration of signal intensities was carried out using ImageQuant Software (Fast Scan; Molecular Dynamics).
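For clarity, the percent-inhibition values quoted under "Results" (e.g. ~85% inhibition by peptide 2 at 300 nM) follow from normalizing each integrated band volume to the no-peptide control. The sketch below is a hedged illustration; the band intensities used are hypothetical placeholders, not the published densitometry data.

```python
# Hedged sketch: converting densitometric band volumes into the
# percent-inhibition values quoted in the text. The example intensities
# are hypothetical placeholders, not the published data.

def percent_inhibition(band: float, control_band: float) -> float:
    """Inhibition of auto-phosphorylation relative to the no-peptide control."""
    return 100.0 * (1.0 - band / control_band)

control = 1000.0      # integrated band volume, no peptide (arbitrary units)
with_peptide = 150.0  # hypothetical band volume at 300 nM of peptide 2

print(round(percent_inhibition(with_peptide, control)))  # -> 85
```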
Immunofluorescence-All reactions were performed at room temperature. Transfected 293T cells were briefly washed three times with PBS and fixed for 20 min in PBS containing 4% paraformaldehyde. Fixed cells were rinsed with PBS and treated with 25 mM NH4Cl in PBS for 10 min to quench free aldehyde groups. Cells were then permeabilized with 0.1% Triton X-100 for 10 min and washed with PBS three times for 10 min each time. For double labeling, the cells were then successively incubated with PBS/3% bovine serum albumin containing: (i) 50 µg/ml each of normal goat and donkey IgGs; (ii) a 1:400 dilution of mAb anti-c-Src and 40 µg/ml anti-caveolin polyclonal IgG; and (iii) LRSC (lissamine rhodamine B sulfonyl chloride)-conjugated goat anti-mouse antibody (5 µg/ml) and fluorescein isothiocyanate-conjugated donkey anti-rabbit antibody (5 µg/ml). The first incubation was 30 min, whereas primary and secondary antibody reactions were 60 min each. Cells were washed three times with PBS between incubations. Slides were mounted with Slow-Fade anti-fade reagent and observed under a Bio-Rad MR600 confocal fluorescence microscope.
To test this hypothesis, we assessed the interaction of caveolin with Src family tyrosine kinases in vitro using (i) baculovirus-expressed Src proteins (c-Src and v-Src) and (ii) bacterially expressed caveolin fusion proteins. As shown in Fig. 1, both c-Src and v-Src were expressed well in baculovirus-infected Sf21 cells, and both migrated as a ~60-kDa band, as expected. It should be noted that this band was not detectable in uninfected cells or in cells infected with vector alone (not shown). GST-caveolin fusion proteins were expressed and purified as we described previously (24,25,27).
As a first step, we examined the interaction of Src tyrosine kinases with full-length caveolin affixed to glutathione-agarose beads (GST-FL-Cav, residues 1-178); GST alone served as a control for nonspecific binding. After binding, extensive washing, and elution with reduced glutathione, bound Src was visualized by immunoblot analysis. Fig. 1B shows that c-Src bound specifically to full-length caveolin but not to GST alone. In addition, v-Src failed to form a stable complex with caveolin under these binding conditions. This may reflect differences in the protein sequences of these two Src kinases, which diverge most extensively at their extreme C termini, resulting in the constitutive auto-activation of v-Src.
In order to define a minimal region of caveolin that can functionally support an association with c-Src, GST-caveolin fusion proteins encoding distinct portions of the N- or C-terminal domains of caveolin were used as the substrate for c-Src binding. Fig. 2 shows that only the GST fusion bearing caveolin residues 61-101 retained c-Src binding activity. However, a similar fusion containing caveolin residues 1-81 did not show any interaction. This suggests that caveolin residues 82-101 are most critical for interaction with c-Src. This 20-amino acid stretch of caveolin residues is located within a membrane-proximal region of the cytosolic N-terminal domain of caveolin. Similarly, this region of caveolin (residues 82-101) has been previously implicated in the direct interaction of caveolin with Gα subunits and H-Ras (19,27).
Caveolin Residues 82-101 Functionally Inhibit the Auto-activation of c-Src in Vitro-We next synthesized a panel of caveolin-derived peptides (Table I). Using purified recombinant c-Src tyrosine kinase, we evaluated the effects of these caveolin peptides on the functional auto-activation of c-Src kinase in vitro.
Auto-activation of c-Src occurs through auto-phosphorylation of tyrosine 416. Tyrosine 416 lies within a conserved "activation loop" that is located within the kinase domain of many distinct families of protein kinases (37,38). Tyrosine 416 is not phosphorylated when c-Src is inactive and is phosphorylated when c-Src is active (38,39). In addition, the Y416F mutation in activated forms of c-Src is sufficient to inhibit their transforming ability (40-43).
Tyrosine 416 is also the major site of auto-phosphorylation within c-Src in vitro (39,44). Thus, in vitro auto-phosphorylation of c-Src serves as a measure of its auto-activation. Fig. 3 illustrates the effect of these caveolin peptides on the functional auto-activation of purified recombinant c-Src kinase. Peptide 1 (Fig. 3A) had no effect at any concentration, whereas peptide 2 (Fig. 3B) dose-dependently suppressed the auto-activation of c-Src kinase. Approximately 85% inhibition was observed at a peptide concentration of 300 nM, and auto-phosphorylation was completely abolished at 3 µM. It should be noted that peptide 2 corresponds to the caveolin region (residues 82-101) that we have implicated in the direct interaction of caveolin with c-Src using caveolin expressed as a GST fusion protein (see Fig. 2 above).
Interestingly, when peptide 2 was divided into two peptides (peptides 3 and 4; Fig. 3C), no inhibition of c-Src auto-activation was observed, even at a peptide concentration of 10 µM. Quantitation of these experiments (Fig. 3, A-C) is presented in Fig. 3D. These observations suggest that the intact domain (caveolin residues 82-101) is required for both structural and functional interaction with c-Src tyrosine kinase.
To test the generality of this phenomenon, we next assessed the effect of caveolin peptides on the auto-phosphorylation of a related Src family tyrosine kinase, Fyn. Fig. 4 shows that similar results were obtained with Fyn kinase. The inhibitory effect of peptide 2 was slightly less potent on Fyn tyrosine kinase: approximately 90% inhibition was observed at a peptide concentration of 3 µM, about 10-fold less potent than the effect we observed with c-Src. As with c-Src, peptide 1 had no effect on Fyn auto-phosphorylation, even at a peptide concentration of 10 µM. The observed differences in the potency of peptide 2 with different Src family tyrosine kinases suggest that (i) the interaction of caveolin with Src family tyrosine kinases is highly specific and (ii) caveolin may preferentially interact with a select subset of Src family members in vivo.
Co-expression with Caveolin Suppresses the Auto-activation of c-Src in Vivo-The experiments described above indicate that caveolin residues 82-101 interact directly with c-Src and functionally suppress auto-activation of c-Src in vitro. However, it is not clear from our experiments whether full-length caveolin can perform this function in vivo.
To test this idea under more physiological conditions, we transiently co-expressed c-Src with full-length caveolin in 293T cells. These cells express extremely low levels of endogenous caveolin; little or no immunoreactivity was observed with anti-caveolin IgG by Western blotting (not shown). After expression, c-Src kinase was isolated by immunoprecipitation and subjected to the immune complex kinase assay (Fig. 5). Although immunoblot analysis indicates that these immunoprecipitates contain equivalent amounts of c-Src kinase, little or no kinase activity was associated with c-Src when co-expressed with caveolin in vivo. Thus, co-expression of caveolin with c-Src can abolish its capacity for auto-phosphorylation, as predicted based on our experiments with caveolin peptides (see Fig. 3).
FIG. 2. Defining a region of caveolin that contains c-Src binding activity.
A, diagrammatic summary of each GST-caveolin fusion protein relative to the complete caveolin molecule. The numbers at the end points reflect their exact amino acid positions within caveolin. These fusion proteins correspond to the full-length caveolin protein (residues 1-178), the C-terminal caveolin domain (residues 135-178), and various portions of the N-terminal domain of caveolin (residues 1-21, 1-41, 1-61, 1-81, or 61-101). These GST-caveolin fusion proteins were constructed and characterized as we described previously (24,25,27). B, molecular mapping of a caveolin region that interacts with c-Src. Using the panel of GST-caveolin fusion proteins enumerated in A, we systematically identified a 41-amino acid region of caveolin (residues 61-101) that is functionally sufficient to interact with c-Src. After binding, extensive washing, and elution with reduced glutathione, bound Src was visualized by immunoblot analysis as in Fig. 1B. Detergent extracts of insect cells recombinantly expressing c-Src were prepared from a total of four T25 flasks containing ~1 × 10^6 cells each. Equivalent amounts of GST and GST-caveolin fusion proteins were used in these binding experiments.

These results imply that the interaction of caveolin with Src family kinases may serve to negatively regulate their functional activity. In addition, double labeling of 293T cells co-transfected with c-Src and caveolin revealed significant co-localization of these two distinct gene products (Fig. 6). This is consistent with results demonstrating co-immunoprecipitation of caveolin with c-Src using antibodies directed against c-Src (Fig. 5A, bottom panel).
Is Tyrosine Phosphorylation of Caveolin Required for the Caveolin-mediated Inhibition of Src Family Kinases?-We do not yet know the mechanism by which caveolin-derived peptides inhibit Src family kinases. One possibility is that this caveolin-mediated inhibition is related to tyrosine phosphorylation of caveolin or caveolin-derived peptides, resulting in a form of competitive substrate inhibition of Src family kinases. In support of this possibility, peptide 2 (DGIWKASFTTFTVTKYWFYR; residues 82-101) contains both inhibitory activity and two tyrosine residues.
To directly examine the possible requirement for tyrosine phosphorylation in this event, we generated two mutated caveolin peptides in which both of these tyrosine residues were changed to alanine (DGIWKASFTTFTVTKAWFAR; termed Y→A) or phenylalanine (DGIWKASFTTFTVTKFWFFR; termed Y→F). Fig. 7 shows that both mutated caveolin peptides lacking tyrosine (Y→A; Y→F) were as effective or more effective than the wild-type peptide sequence in suppressing the auto-phosphorylation of c-Src. In addition, the mutant Y→F peptide was approximately twice as potent as the wild-type caveolin peptide. These results demonstrate that tyrosine phosphorylation of this caveolin sequence (residues 82-101) is not required for its inhibitory activity toward c-Src.

FIG. 3. Effects of caveolin peptides on the auto-activation of c-Src tyrosine kinase. Caveolin peptides, detailed in Table I, were examined for their effect on the auto-phosphorylation of purified recombinant c-Src kinase in vitro. The effects of peptide 1 (A), peptide 2 (B), and peptides 3 and 4 (C) are shown. Note that peptide 1 had no effect, whereas peptide 2 dose-dependently suppressed the auto-phosphorylation of c-Src. Peptide 2 encodes residues 82-101 of caveolin, the region that contains Src binding activity as we have shown in Fig. 2B using GST-caveolin fusion proteins. Peptides 3 and 4 correspond to the N-terminal and C-terminal halves of peptide 2. Note that peptides 3 and 4 have no effect, indicating that the complete 82-101 region of caveolin (peptide 2) is required for inhibiting the auto-activation of c-Src. Quantitation of these experiments is provided in D. Cumulative data are shown as the means ± S.D. These experiments were performed at least three times independently in duplicate.
In accordance with these results, the Src-binding region we have defined here within caveolin (caveolin residues 61-101) is not a substrate for tyrosine phosphorylation by c-Src, as we have published previously (see Fig. 2 within Ref. 45). Also, the site of caveolin phosphorylation by c-Src occurs at a single tyrosine residue (tyrosine 14; Ref. 45), and this residue is outside caveolin's Src-binding region (caveolin residues 61-101). Thus, these experimental observations also argue against competitive substrate inhibition via tyrosine phosphorylation.
FIG. 4. Effects of caveolin peptides on the auto-activation of Fyn, a closely related Src family tyrosine kinase. Caveolin peptides were examined for their effect on the auto-phosphorylation of purified recombinant Fyn kinase in vitro. The effects of peptide 1 (A) and peptide 2 (B) are shown. Note that peptide 1 had no effect, whereas peptide 2 dose-dependently suppressed the auto-activation of Fyn. Quantitation of these experiments is provided in C. Cumulative data are shown as the means ± S.D. These experiments were performed at least three times independently in duplicate.

What about the Functional Activity of Other Caveolin Family
FIG. 5. In vivo effect of the full-length caveolin molecule on the auto-activation of c-Src tyrosine kinase revealed by co-expression of caveolin and c-Src in 293T cells.
A, 293T cells (~1-2 × 10^6 cells/10-cm dish) were transfected with c-Src alone or co-transfected with c-Src plus caveolin. 48 h post-transfection, cells were washed and collected in lysis buffer. Cell lysates were immunoprecipitated with anti-Src IgG bound to protein A-Sepharose. These immunoprecipitates were subjected to immunoblot analysis with a Src mAb probe (top panel), an immune complex kinase assay to detect Src auto-phosphorylation (middle panel), and immunoblot analysis with a caveolin mAb probe (bottom panel). Note that although both immunoprecipitates contain equivalent amounts of c-Src (top panel), co-expression with caveolin prevents the auto-phosphorylation of c-Src (middle panel). Also, caveolin co-immunoprecipitates with c-Src when using antibodies directed against c-Src (bottom panel). One 10-cm dish was used per immunoprecipitation. B, quantitation of A (middle panel). The auto-phosphorylation of c-Src tyrosine kinase is expressed in arbitrary units.
Members?-Recently, two novel caveolin-related proteins have been identified and cloned. These proteins, termed caveolin-2 and caveolin-3, are the products of separate caveolin genes (46-48). Thus, caveolin (also known as caveolin-1) is the first member of a multi-gene family (46).
Caveolins-1, -2, and -3 are structurally homologous proteins but are immunologically distinct molecules; they have different but overlapping tissue distributions (20,46-48). For example, the expression of caveolin-3 is absolutely muscle-specific (skeletal and cardiac muscle cells) (20,47). Caveolin-1 is not expressed in these striated muscle tissues, but smooth muscle cells co-express caveolins-1 and -3 (20). Furthermore, caveolins-1 and -2 are co-expressed in adipocytes and share the same overlapping tissue distribution (46). Thus, a given mammalian cell, such as a smooth muscle cell or fibroblast, may co-express up to three or four immunologically distinct caveolin protein products.
As a consequence of these findings, we also evaluated the effects of peptides derived from caveolins-2 and -3 that correspond to the 82-101 region in caveolin-1. Peptides derived from all three caveolins contain two conserved tyrosine residues (marked by arrows in Fig. 8A) and are extremely homologous. Fig. 8 shows that only the peptides derived from caveolins-1 and -3 exhibited inhibitory effects; the caveolin-2-derived peptide had no inhibitory effect.
This differential effect was also observed when these three peptides were used to evaluate their effects on the functional activity of heterotrimeric G-proteins. More specifically, peptides derived from caveolins-1 and -3 both dose-dependently suppressed the GTPase activity of purified heterotrimeric G-proteins, whereas the caveolin-2-derived peptide did not possess this inhibitory activity (27,46,47). Thus, our previous results with G-proteins directly parallel our current results with Src family tyrosine kinases.

DISCUSSION

Protein-tyrosine kinases play an essential role in the regulation of cellular growth control and differentiation. Two general classes of tyrosine kinases have been defined: transmembrane receptors and nonreceptor cytoplasmically oriented kinases (49). Receptor tyrosine kinases are activated by the binding of ligand to the extracellular protein domain, whereas non-receptor tyrosine kinases must be activated indirectly. Both classes of tyrosine kinases can function as transforming oncogenes when mutationally activated (50,51).
c-Src is the prototype of a family of non-receptor tyrosine kinases. v-Src, the transforming component of the Rous sarcoma virus, is derived from c-Src, a normal cellular gene. These two gene products differ primarily in their extreme C-terminal regions (38,52,53). Mutationally activated c-Src has been implicated in the pathogenesis of carcinomas, including breast tumors (54). In contrast, recombinant overexpression of wild-type c-Src does not result in efficient cell transformation or increased tyrosine phosphorylation, indicating that the tyrosine kinase activity of c-Src is normally repressed (37,55). However, the tightly regulated tyrosine kinase activity of c-Src can be activated transiently by a number of growth factors (38,56).
To date, a total of nine distinct members of the Src family of tyrosine kinases have been identified and cloned (57). c-Src and all other family members contain five conserved functional domains (38). (i) The extreme N terminus contains a consensus sequence for lipid modification by addition of myristate, palmitate, or both fatty acyl moieties (58-60). (ii) and (iii) The SH2 and SH3 domains function in protein-protein interactions. The SH2 domain recognizes specific phosphotyrosine-based motifs (61), whereas the SH3 domain binds proline-rich peptide sequences with the consensus PXXP (62-64). (iv) The SH1 domain (or catalytic domain) functions in the recognition and tyrosine phosphorylation of specific substrates (65). (v) Finally, the extreme C terminus contains a site of negative regulation, Tyr 527 (37,55). As a consequence of C-terminal divergence, v-Src does not contain Tyr 527, which when phosphorylated can act to functionally inactivate c-Src kinase (38).
It is now well established that tyrosine phosphorylation of c-Src itself plays a major role in controlling its intrinsic kinase activity. Auto-phosphorylation of Tyr 416, located within the "activation domain" of the kinase domain, results in activation of the enzyme, whereas phosphorylation of Tyr 527 results in kinase inactivation (38,66). Phosphorylation of Tyr 527 is thought to be mediated by a C-terminal Src kinase (Csk) (67-70). After phosphorylation of Tyr 527, c-Src is thought to "fold up" through the intramolecular interaction of Tyr 527 with the SH2 domain, resulting in an inactive enzyme conformation (37), although this model remains speculative.

FIG. 6. Localization of c-Src and caveolin within a single cell. 293T cells were co-transfected with c-Src and untagged mammalian caveolin. c-Src expression was detected with a mouse mAb that recognizes a conserved N-terminal epitope; caveolin expression was detected using rabbit polyclonal IgG directed against caveolin. Control experiments using singly transfected populations of cells confirmed the specificity of these antibodies; no cross-reaction was observed (not shown). Transfected cells expressing both c-Src and caveolin were selected for imaging by laser confocal fluorescence microscopy. Primary antibodies were detected using distinctly tagged fluorescent secondary antibodies: A, rhodamine-conjugated for c-Src; B, fluorescein-conjugated for caveolin; and C, superposition of the fluorescent images of A and B. Note that co-localization of c-Src and caveolin generates a yellow fluorescent image.

Because elevated Src tyrosine kinase activity is associated with cellular transformation, many investigators have
searched for mechanisms to inactivate the kinase activity of c-Src. For example, Shoelson and colleagues have shown that high concentrations of peptide substrates inhibited the auto-phosphorylation of c-Src kinase (71). Similarly, Shalloway and colleagues, using phosphopeptides as a probe, found that auto-phosphorylation of c-Src is an intermolecular process (72). However, in both reports, the investigators used much higher concentrations of peptides (0.24 mM or even 8.5 mM). In contrast, here we have found that only 300 nM of caveolin peptide 2 (caveolin residues 82-101) strongly inhibited c-Src auto-phosphorylation; similar results were obtained with Fyn, a related Src-like kinase. Thus, our current studies imply that caveolin is a potent inhibitor of c-Src and perhaps other Src family tyrosine kinases. In addition, our results indicate that caveolin preferentially interacts with c-Src rather than v-Src. These observations suggest that caveolin better recognizes the inactive conformation of c-Src, and thus, c-Src tyrosine kinase activity could be normally repressed by an interaction with caveolin.
How does caveolin binding negatively regulate c-Src auto-activation? The functional existence of such a negative regulator or suppressor of c-Src has been suggested by earlier studies (37). Because mutations in the extreme C terminus, the kinase domain, the SH2 domain, or the SH3 domain can constitutively activate c-Src, this activation may be mediated by a conformational change in one of these regions (38). Such a conformational change could also be induced normally by unknown allosteric effectors. Putative allosteric activators could stabilize c-Src in the activated state, whereas putative allosteric inhibitors could hold c-Src in an inactive conformation (37). Allosteric inhibitors could act by shifting the equilibrium in favor of the inactive conformation, perhaps by masking Tyr 416 from kinases or Tyr 527 from phosphatases (37,38). Because caveolin preferentially interacts with inactive c-Src (but not v-Src) and a caveolin-derived peptide or caveolin co-expression can inhibit auto-activation of c-Src, caveolin could act as an allosteric inhibitor of c-Src and other Src family kinases.
The subcellular distribution of several signaling molecules is restricted by association with scaffolding proteins (28). These scaffolding proteins, Ste5p (73,74), AKAP-79 (29,30,75,76), and 14-3-3 (77-80), may simultaneously associate with distinct classes of signaling proteins to form a signaling pathway or module. So far, the accumulated evidence suggests that caveolin possesses all the qualities of a "classic" scaffolding protein, because caveolin forms multivalent homo-oligomers and each caveolin-interacting protein binds to the same cytosolic membrane-proximal region of caveolin. As such, caveolin may provide a selective framework that segregates one group of signaling events from the next by preventing cross-talk between functionally unrelated signaling modules while facilitating cross-talk between related signaling modules.
Because a discrete cytosolic domain of caveolin (residues 82-101) (i) directly interacts with wild-type Gα subunits, H-Ras, and c-Src; (ii) fails to interact with mutationally activated Gα subunits (Gs; Q227L), H-Ras (G12V), and Src (v-Src); (iii) functionally suppresses the activity of both G-proteins and Src family tyrosine kinases, holding them in the inactive conformation; and (iv) is involved in the formation of multivalent caveolin homo-oligomers, we propose that the term caveolin-scaffolding domain be used when describing this region of caveolin. FIG. 7. Effects of mutated caveolin peptides on the auto-activation of c-Src tyrosine kinase. Wild-type and mutant caveolin peptides detailed in A were examined for their effect on the autophosphorylation of purified recombinant c-Src kinase in vitro. The effects of these peptides are shown in B. All peptides were added at a concentration of 1 µM. Note that both mutated caveolin peptides lacking tyrosine residues (Y→A and Y→F) were as effective or more effective than the wild-type (WT; peptide 2, residues 82-101) peptide sequence in suppressing the auto-phosphorylation of c-Src. The Y→F peptide was approximately twice as potent as the wild-type caveolin peptide.
FIG. 8. Differential effect of peptides derived from caveolins-1, -2, and -3 on the auto-activation of c-Src tyrosine kinase. Recently, two caveolin-related proteins (caveolin-2 and caveolin-3) have been identified and cloned; caveolin has been re-termed caveolin-1. As a consequence, we also evaluated the effects of peptides derived from caveolins-2 and -3 that correspond to the 82-101 region of caveolin-1; the sequences of these peptides are detailed in A. All three peptides contain two conserved tyrosine residues (marked by arrows) and are extremely homologous. The effects of these peptides are shown in B. All peptides were added at a concentration of 1 µM. Note that only peptides derived from caveolins-1 and -3 exhibited inhibitory effects; the caveolin-2-derived peptide had no inhibitory effect. …(Children's Hospital/Harvard Medical School) for critical reading of the manuscript.
"year": 1996,
"sha1": "734e0e7d1c5e5c5e3733739df989c522b8dda5e4",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/46/29182.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "a65facd640d48348c1932f023ab6e9932fd38d64",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Estimation of Outdoor PM 2.5 Infiltration into Multifamily Homes Depending on Building Characteristics Using Regression Models
The purpose of this study was to evaluate outdoor PM 2.5 infiltration into multifamily homes according to the building characteristics using regression models. Field test results from 23 multifamily homes were analyzed to investigate the infiltration factor and building characteristics including floor area, volume, outer surface area, building age, and airtightness. Correlation and regression analysis were then conducted to identify the building factor that is most strongly associated with the infiltration of outdoor PM 2.5 . The field tests revealed that the average PM 2.5 infiltration factor was 0.71 (±0.19). The correlation analysis of the building characteristics and PM 2.5 infiltration factor revealed that building airtightness metrics (ACH 50 , ELA/FA, and NL) had a statistically significant (p < 0.05) positive correlation (r = 0.70, 0.69, and 0.68, respectively) with the infiltration factor. Following the correlation analysis, a regression model for predicting PM 2.5 infiltration based on the ACH 50 airtightness index was proposed. The study confirmed that the outdoor-origin PM 2.5 concentration in sufficiently leaky units could be up to 1.59 times higher than that in airtight units.
Introduction
Outdoor PM 2.5 is known to cause respiratory and cardiovascular diseases when the human body is exposed to it for long periods [1,2], and it is classified as a Group 1 carcinogen by the International Agency for Research on Cancer (IARC) under the World Health Organization (WHO). Accordingly, many countries have proposed national countermeasures against outdoor PM 2.5 and have established standards intended to reduce the damage caused by exposure to PM 2.5 . The U.S. Environmental Protection Agency (EPA) established the National Ambient Air Quality Standards for PM 2.5 in 1997 and later reinforced them, with a 24-h standard of 35 µg/m³. The State Council of the People's Republic of China issued the Air Pollution Prevention and Control Action Plan in 2013 [3]. The Ministry of Environment (MOE) in Korea presented the High Concentration Fine Particle Response Manual for vulnerable groups in 2017. Action levels for outdoor PM 2.5 , which is known to have a large impact on the human body, have been in force in Korea since 2015.
Nevertheless, outdoor PM 2.5 can infiltrate the indoors through cracks in buildings, even under non-ventilated conditions, thereby affecting indoor PM 2.5 concentrations [4,5]. As outdoor- and indoor-origin PM 2.5 differ in composition, formation, and toxicity [6,7], it is necessary to evaluate their concentrations separately in order to establish management strategies for reducing indoor PM 2.5 concentrations. Moreover, since outdoor-origin PM 2.5 consists of air pollutants such as nitrates, sulfates, and carbon compounds, it is known to have higher health risks than indoor-origin PM 2.5 [8]. Accordingly, it is important to evaluate the infiltration of outdoor PM 2.5 when managing indoor PM 2.5 .
Several studies have evaluated the infiltration of outdoor PM 2.5 indoors. Existing studies have suggested a relationship between indoor and outdoor PM 2.5 concentrations through the calculation of the indoor-outdoor concentration ratio (I/O ratio) in residential buildings [9][10][11][12]. In measurements of occupied buildings, the average I/O ratio was found to be in the range of 0.61 to 1.00, which indicates that the outdoor PM 2.5 concentration affects the indoor PM 2.5 concentration. Several studies have also assessed the infiltration of outdoor PM 2.5 using infiltration factors [13][14][15][16][17][18]. The PM 2.5 infiltration factor is an indicator of the equilibrium fraction of outdoor PM 2.5 that penetrates and remains suspended indoors. In these studies, the infiltration factor ranged from 0.35 to 0.66, from which it can be estimated that the indoor PM 2.5 concentration in residential buildings is 35-66% of the outdoor PM 2.5 concentration. These results show that the impact of outdoor-origin PM 2.5 on indoor concentrations may vary according to the building characteristics.
Infiltration of outdoor PM 2.5 depends on building characteristics such as the building size (floor area and volume of the room), year of construction, and airtightness [18][19][20]. In addition, environmental conditions such as temperature and pressure differences between the indoors and outdoors can affect the amount of outdoor-origin PM 2.5 reaching the indoors [18][19][20][21]. Stephens and Siegel [22] conducted infiltration tests of ultrafine particles (20-1000 nm in diameter) in 18 detached homes in the U.S. to analyze the correlation between various building characteristics and the outdoor source of ultrafine particles. In their study, environmental conditions, including indoor-outdoor pressure differences, differed across the tested units during the infiltration tests. This highlights a limitation: when environmental conditions are not controlled, their influence is confounded with that of the building characteristics in the assessment of infiltration. Unlike the detached houses studied in previous research, according to the 2015 Population and Housing Census of Korea [23], 77.2% of residential buildings in Korea are multifamily homes, most of which are high-rise buildings of 15 or more stories. Accordingly, PM 2.5 infiltration is expected to vary due to differences in building characteristics. To establish targeted management strategies for reducing indoor PM 2.5 in diverse multifamily housing units in Korea, it is necessary to identify the impact of the dominant building factors on the infiltration of outdoor PM 2.5 .
This study aimed to estimate the outdoor PM 2.5 infiltration of multifamily homes depending on the building characteristics. Field test results for 23 multifamily homes were analyzed to investigate the infiltration factor and building characteristics including the floor area, volume, outer surface area, building age, and airtightness. Subsequently, regression analysis was conducted to identify the dominant building factors influencing infiltration of outdoor PM 2.5 . To minimize the impact of environmental disturbances, the blower-door depressurization procedure [18,24], which enables the maintenance of an identical indoor-outdoor pressure difference for each test housing unit, was utilized to conduct the PM 2.5 infiltration test. Based on the correlation analysis results, a regression model for predicting PM 2.5 infiltration according to the ACH 50 airtightness index is proposed.
Analysis Units
The analysis units consisted of a total of 23 domestic homes. These homes had reinforced concrete structures with layouts including living rooms, kitchens, and toilets and had various building characteristics. They included 12 units being tested for the first time and 11 units that had been previously investigated in a study by Choi and Kang [18]. Among the building characteristics of the analysis units, the construction year, floor area, and window area were obtained through on-site investigation and are listed in Table 1. The average building age was 13.6 years, with a minimum of 1 year and a maximum of 38 years. The floor area ranged from 14 m² to a maximum of 212 m², with an average of 57.4 m². In terms of floor area, both small and large units were thus included in the experiment. To analyze the correlation between building factors and outdoor PM 2.5 infiltration, field tests were conducted to measure the airtightness of the buildings and the PM 2.5 infiltration factor.
Airtightness Test
To measure the airtightness of the test homes, the fan pressurization method was applied in compliance with ISO 9972 [25]. The airtightness of the buildings was calculated from the air flow rate generated by the fan at five indoor-outdoor pressure-difference points between 10 and 60 Pa. The relationship between the indoor-outdoor pressure difference and the resulting air flow rate is described by the power law in Equation (1); the trend line obtained by fitting the measured values yields the air leakage coefficient (C) and the pressure exponent (n). C depends on the leakage characteristics of the building; n lies between 0.5 and 1: it is close to 0.5 when the inflow air is turbulent and close to 1.0 when it is laminar. The power law is

Q = C · (∆P)^n, (1)

where Q is the air leakage rate through the building envelope (m³·h⁻¹), C is the air leakage coefficient (m³·h⁻¹·Pa⁻ⁿ), ∆P is the induced pressure difference (Pa), and n is the pressure exponent (dimensionless). ACH 50 (the air change rate at 50 Pa), which is used as a performance indicator of airtightness, is the ratio of the air flow rate at an indoor-outdoor pressure difference of 50 Pa to the volume of the room (Equation (2)):

ACH 50 = Q 50 / V, (2)

where V is the volume of the room (m³). The effective leakage area (ELA) of a unit at an indoor-outdoor pressure difference of 4 Pa can be calculated using Equation (3):

ELA = C · (∆P r )^n · √(ρ / (2 · ∆P r )). (3)

Since the ELA of each unit depends on the size of the unit, the specific ELA, which distributes the ELA over the floor area, was also calculated.
The normalized leakage (NL), which allows comparison of the airtightness between units by accounting for their floor area and height, is calculated from the ELA, floor area, and floor height using Equation (4):

NL = 1000 · (ELA / A f ) · (H / 2.5)^0.3, (4)

where Q 50 is the air flow rate through the building envelope under a pressure difference of 50 Pa (m³·h⁻¹), C is the air flow coefficient (m³·h⁻¹·Pa⁻ⁿ), ∆P r is the reference pressure difference (Pa), n is the air flow exponent (dimensionless), ρ is the air density (kg·m⁻³), A f is the floor area (m²), and H is the floor height (m). In this study, a Retrotec EU6101 with DM32 (USA) was used as the measurement equipment for the fan pressurization method; the measurement error of the air flow rate was ±5%. To prevent measurement errors caused by natural pressure variations, the measurement conditions proposed in ISO 9972 were employed, that is, a wind speed of less than 6 m/s and a natural indoor-outdoor pressure difference of less than 5 Pa. Assuming a single-zone target unit, the interior doors were kept open during the blower-door measurement, and the air flow rate generated by the fan was measured at indoor-outdoor pressure differences of 10, 20, 30, 40, and 50 Pa. Based on the blower-door measurements, the following airtightness indicators were derived: C (leakage coefficient), n (pressure exponent), ACH 50 , ELA (effective leakage area), specific ELA, and NL (normalized leakage). To classify the analysis units by airtightness level, the leakage class was determined according to the airtightness and ventilation requirements presented by ASHRAE 119 [26], as shown in Table 2.
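As a rough illustration of the workflow in Equations (1)-(4), the sketch below fits the power law to fan-pressurization points and derives ACH 50 , ELA, and NL. The function name and sample values are ours, not from the study; ∆P r = 4 Pa and ρ = 1.2 kg·m⁻³ are assumed, and the ASHRAE 119 reference height of 2.5 m is used in the NL formula.

```python
import numpy as np

def airtightness_metrics(dP, Q, volume, floor_area, height, rho=1.2):
    """Fit Q = C * dP**n on log-log axes (Eq. 1), then derive
    ACH50 (Eq. 2), ELA at the 4 Pa reference (Eq. 3), and NL (Eq. 4)."""
    n, logC = np.polyfit(np.log(dP), np.log(Q), 1)
    C = np.exp(logC)
    Q50 = C * 50.0 ** n                       # flow at 50 Pa, m^3/h
    ach50 = Q50 / volume                      # air change rate at 50 Pa, 1/h
    dPr = 4.0                                 # reference pressure difference, Pa
    Qr_si = C * dPr ** n / 3600.0             # flow at 4 Pa, converted to m^3/s
    ela = Qr_si * np.sqrt(rho / (2.0 * dPr))  # effective leakage area, m^2
    nl = 1000.0 * (ela / floor_area) * (height / 2.5) ** 0.3
    return ach50, ela * 1e4, nl               # ELA returned in cm^2

# Synthetic fan-pressurization points following Q = 100 * dP**0.65
dP = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
Q = 100.0 * dP ** 0.65
ach50, ela_cm2, nl = airtightness_metrics(dP, Q, volume=150.0,
                                          floor_area=60.0, height=2.5)
```

Because the synthetic points follow the power law exactly, the log-log fit recovers C = 100 and n = 0.65, so the derived metrics can be checked directly against the closed-form expressions.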
PM 2.5 Infiltration Test
To analyze the effects of building factors on the infiltration of outdoor PM 2.5 , a PM 2.5 infiltration test was conducted using the blower-door depressurization method [18], which enables the assessment of outdoor PM 2.5 infiltration under controlled pressure differences. The main strategy of the blower-door depressurization method is to use a blower door to fix the indoor-outdoor pressure difference at 10 Pa and then to measure the indoor and outdoor PM 2.5 concentrations. To obtain the indoor PM 2.5 concentration after the infiltrated outdoor-origin PM 2.5 had been fully mixed into the indoor air, the indoor and outdoor PM 2.5 concentration measurements were obtained after operating the blower door for more than one time constant to entirely replace the room air under the controlled indoor-outdoor pressure difference of 10 Pa.
Under natural conditions, the difference between the indoor and outdoor pressures of a building is generally known to be 4 Pa [27]. In this study, the pressure difference was limited to 10 Pa through the blower door to enable the comparison of the building-specific infiltration factor. This is the minimum recommended pressure difference at which the flow rate is controlled during the blower-door experiment [27], and it is an indoor-outdoor pressure difference that can be found in mid- and high-rise buildings or that can be caused by external winds in winter [28,29]. Based on the living environment in Korea, where the proportion of high-rise multifamily housing units is high, a pressure difference of 10 Pa is therefore judged as suitable for simulating the natural infiltration environment in middle- and high-rise units. Although low indoor-outdoor pressure differences can cause the measured PM 2.5 infiltration factor to be slightly higher than the actual PM 2.5 infiltration factor, this study included an infiltration experiment under the same environmental conditions to select the dominant building factors for outdoor PM 2.5 infiltration through comparison of the units and then evaluated the PM 2.5 infiltration level.
In this study, the PM 2.5 infiltration factor, as an indicator of outdoor PM 2.5 infiltration, was calculated using the indoor PM 2.5 mass balance equation. Equation (5) is the indoor PM 2.5 mass balance; it is composed of the outdoor PM 2.5 infiltration, indoor PM 2.5 generation, deposition, resuspension, removal, and exfiltration terms:

V · dC_in/dt = P · λ · V · C_out − λ · V · C_in − K · V · C_in + E + R_resus − R_rem, (5)

where V is the volume of the room (m³), C_in is the indoor PM 2.5 concentration (µg·m⁻³), C_out is the outdoor PM 2.5 concentration (µg·m⁻³), P is the PM 2.5 penetration coefficient (dimensionless), λ is the air change rate (h⁻¹), K is the PM 2.5 deposition rate (h⁻¹), E is the indoor PM 2.5 emission rate (µg·h⁻¹), R_resus is the PM 2.5 resuspension rate (µg·h⁻¹), and R_rem is the PM 2.5 removal rate (µg·h⁻¹). Under the blower-door condition the air change rate is ACH 10 (the air change rate at 10 Pa, h⁻¹), and with the assumption that there is no indoor PM 2.5 generation source, resuspension, or removal, the change in indoor PM 2.5 concentration is expressed by Equation (6):

V · dC_in/dt = P · ACH 10 · V · C_out − (ACH 10 + K) · V · C_in. (6)

When the indoor PM 2.5 concentration reaches a steady state, the indoor concentration is given by Equation (7):

C_in,ss = [P · ACH 10 / (ACH 10 + K)] · C_out,ss, (7)

at which point the PM 2.5 infiltration factor (F_in) can be obtained as the ratio of the indoor and outdoor PM 2.5 concentrations at steady state, as shown in Equation (8):

F_in = C_in,ss / C_out,ss = P · ACH 10 / (ACH 10 + K), (8)

where C_in,ss is the indoor PM 2.5 concentration at steady state (µg·m⁻³) and C_out,ss is the outdoor PM 2.5 concentration at steady state (µg·m⁻³).
To conduct the PM 2.5 infiltration test using the fan pressurization method, Retrotec EU6101 with DM32 (USA) was used for the blower door, and a light-scattering-type AM510 (TSI, Shoreview, MN, USA), which has been used for continuous measurement of PM 2.5 concentration in previous studies [30,31], was used for the measurements. The measurement error of the PM 2.5 concentration was 1 µg/m 3 over 24 h. At a measurement interval of 3 min, the indoor and outdoor PM 2.5 concentrations were measured at one point in the center of the unit and at one point in the outdoor area close to the unit. To prevent the resuspension of indoor PM 2.5 caused by air flow through the blower door, cleaning was carried out to remove indoor PM 2.5 sources before the measurements, and the measurements were conducted in the absence of indoor PM 2.5 sources or resuspension activities in the room. The PM 2.5 concentration was obtained after one time constant at a 10 Pa pressure difference at the steady-state of the indoor PM 2.5 concentration, and the infiltration factor of PM 2.5 was calculated using Equation (8).
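The steady-state calculation in Equation (8) amounts to a ratio of averaged concentrations; a minimal sketch follows, where the function names and sample readings are illustrative rather than the study's data.

```python
import statistics

def infiltration_factor(c_in_ss, c_out_ss):
    """F_in = C_in,ss / C_out,ss (Eq. 8): steady-state fraction of outdoor
    PM2.5 found indoors, assuming no indoor sources, resuspension, or removal.
    Repeated 3-min readings are averaged to damp instrument noise."""
    return statistics.mean(c_in_ss) / statistics.mean(c_out_ss)

def infiltration_factor_model(P, ach, K):
    """Equivalent model form of Eq. 8: F_in = P * ACH / (ACH + K)."""
    return P * ach / (ach + K)

# Illustrative indoor/outdoor steady-state readings (ug/m^3)
f_in = infiltration_factor([28.0, 30.0, 29.0], [41.0, 40.0, 42.0])
```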
The PM 2.5 infiltration test with the blower-door depressurization procedure was conducted to minimize the impact of environmental factors on the outdoor PM 2.5 infiltration when comparing the PM 2.5 infiltration factors of multifamily homes according to their building characteristics. Nevertheless, the outdoor PM 2.5 concentration conditions, which also affect the measured infiltration, varied at the time of the measurements. When the outdoor PM 2.5 concentration is low, the margin of error in the calculation of the PM 2.5 infiltration factor may even increase to the level of the device measurement error (1 µg/m³) due to the small difference between the indoor and outdoor PM 2.5 concentrations. When the outdoor PM 2.5 concentration changes drastically, the infiltration factor may be overestimated or underestimated depending on the pattern of change. The analysis was thus performed by classifying the outdoor PM 2.5 concentration and its fluctuations, as these are expected to affect the outdoor PM 2.5 infiltration (Table 3). OPC-1 denotes the combination of concentrations that exceed the "bad" level of a daily average of 35 µg/m³ presented by the MOE in Korea and the U.S. EPA with low fluctuation, i.e., measurements with a deviation of less than 10% of the average outdoor PM 2.5 concentration, and this case was adopted for statistical analysis. Moreover, based on the measurement results for OPC-2, which includes average outdoor PM 2.5 concentrations below 35 µg/m³, and OPC-3, which includes outdoor PM 2.5 concentration deviations of 10% or more of the average, trends in the measurement results were investigated according to the outdoor PM 2.5 concentration conditions.
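The OPC grouping can be sketched as a simple rule. The function name is ours, "deviation" is taken here as the standard deviation of a session's readings, and the precedence when a session is both below 35 µg/m³ and highly fluctuating is our assumption, since the paper does not state it.

```python
import statistics

def classify_opc(samples, threshold=35.0, rel_dev=0.10):
    """Classify a measurement session by its outdoor PM2.5 conditions:
    OPC-1: mean above `threshold` with deviation below 10% of the mean,
    OPC-2: mean at or below `threshold`,
    OPC-3: deviation of 10% or more of the mean."""
    mean = statistics.mean(samples)
    dev = statistics.pstdev(samples)
    if mean > threshold and dev < rel_dev * mean:
        return "OPC-1"
    if mean <= threshold:
        return "OPC-2"
    return "OPC-3"
```

For example, a session averaging 50 µg/m³ with near-constant readings falls in OPC-1, a clean-air session averaging 20 µg/m³ in OPC-2, and a strongly fluctuating high-concentration session in OPC-3.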
Regression Analysis
To determine the building factors that have a dominant influence on outdoor PM 2.5 infiltration, the correlation between the building factors and the PM 2.5 infiltration factor was analyzed. Prior to the correlation analysis, tests of normality (the Kolmogorov-Smirnov test and the Shapiro-Wilk test) were applied to the measurement data to check whether the continuous variables follow a normal distribution. Subsequently, the Pearson correlation coefficient (r) was calculated to determine the strength of the linear relationship between the variables, and p-values were calculated to evaluate the statistical significance of the relationship between the building factors and outdoor PM 2.5 infiltration. For the statistical analysis, we utilized the Statistics and Machine Learning Toolbox in MATLAB. Linear regression analysis was then performed with the PM 2.5 infiltration factor as the dependent variable to produce an equation that describes it in terms of the dominant building factor derived from the correlation results.
Pearson's Correlation Coefficient
Pearson's correlation coefficient quantifies the linear relationship between two variables; the correlation coefficient (r_xy) between variables x and y is given by Equation (9):

r_xy = Σ(x_i − x̄)(y_i − ȳ) / √( Σ(x_i − x̄)² · Σ(y_i − ȳ)² ), (9)

where x̄ is the mean of x and ȳ is the mean of y. r_xy lies in the range [−1, 1]: the closer its absolute value is to 1, the stronger the correlation; an absolute value greater than 0.7 is conventionally considered a strong correlation. The statistical significance of the correlation can be tested by a t-test, and the correlation is considered statistically significant when the p-value is less than 0.05.
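Equation (9) translates directly into code; the from-scratch sketch below is an illustration, with no claim about the MATLAB toolbox actually used in the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation per Eq. (9): sum of cross-deviations about the
    means, normalized by the root of the product of squared-deviation sums."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)
```

The t-statistic for the significance test mentioned above would follow as t = r·√((n − 2)/(1 − r²)) with n − 2 degrees of freedom.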
Regression Model
Regression analysis is a method for numerically modeling the relationship between independent and dependent variables and is based on the method of least squares. A model is selected when the sum of the squared residuals between the linear model and the observations is minimized. Regression models have the advantage of being able to quantify the relationship between the independent variables and the dependent variable and facilitate the intuitive interpretation of relationships among factors, making them widely used for the evaluation of explanatory objective variables in existing studies [32,33].
In this study, a regression model was used to evaluate outdoor PM 2.5 infiltration based on the selected building factors. To select a suitable model for describing the relationships between the variables, four types of linear regression were fitted (linear, log-linear, linear-log, and log-log regression), the log-transformation models being able to capture nonlinear relationships between variables through their log transformation. The coefficient of determination (R²), Equation (10), was used as an indicator of each regression model's ability to explain the measured values:

R² = 1 − Σ(Y_i − Ŷ_i)² / Σ(Y_i − Ȳ)², (10)

where Y_i is the i-th measured value, Ȳ is the mean of the measured values, and Ŷ_i is the i-th value predicted by the regression model. R² falls in the range [0, 1]; the closer it is to 1, the better the regression model describes the measurements.

Table 4 presents the airtightness measurements obtained for the analysis units. The ELA was found to range from 8 cm² to 435 cm². The PM 2.5 infiltration was expected to vary depending on the leakage area, which serves as the infiltration path for PM 2.5 under the reference differential pressure condition (4 Pa). The ratio of ELA to the floor area (ELA/FA) was calculated to control for the difference in ELA due to the varying size of the analysis units; it ranged from 0.47 cm²/m² to 7.65 cm²/m². The average ACH 50 was found to be 7.0 (±3.9) h⁻¹, with a minimum of 1.4 h⁻¹ and a maximum of 15.0 h⁻¹, which are similar to the results of previous studies (1.9 and 12.9 h⁻¹, respectively) [34][35][36][37][38] that investigated the ACH 50 of Korean multifamily homes. We found that the leakage classes of the multifamily homes, calculated based on the ACH 50 and NL of the analysis units, span a wide range of airtightness: from A (sufficiently tight) to G (highly leaky). Table 5 presents the results of the infiltration tests in the multifamily homes.
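The four-way model comparison described in the Regression Model subsection can be sketched as follows. Names and the synthetic data are illustrative, and mapping each log-variant's predictions back to the original F_in scale before computing R² is our assumption about how the models are scored.

```python
import numpy as np

def fit_variants(ach50, f_in):
    """Fit the four bivariate forms (linear, linear-log, log-linear, log-log)
    by least squares and score each by R^2 on the original F_in scale."""
    x = np.asarray(ach50, dtype=float)
    y = np.asarray(f_in, dtype=float)
    forms = {
        "linear":     (x,         y,         lambda p, x: p[0] + p[1] * x),
        "linear-log": (np.log(x), y,         lambda p, x: p[0] + p[1] * np.log(x)),
        "log-linear": (x,         np.log(y), lambda p, x: np.exp(p[0] + p[1] * x)),
        "log-log":    (np.log(x), np.log(y), lambda p, x: np.exp(p[0] + p[1] * np.log(x))),
    }
    results = {}
    for name, (xt, yt, predict) in forms.items():
        slope, intercept = np.polyfit(xt, yt, 1)
        y_hat = predict((intercept, slope), x)
        ss_res = float(np.sum((y - y_hat) ** 2))
        ss_tot = float(np.sum((y - np.mean(y)) ** 2))
        results[name] = (float(intercept), float(slope), 1.0 - ss_res / ss_tot)
    return results

# Synthetic data following an exact linear-log relation F_in = 0.3 + 0.1*ln(ACH50)
ach50 = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
f_in = [0.3 + 0.1 * np.log(v) for v in ach50]
results = fit_variants(ach50, f_in)
best = max(results, key=lambda k: results[k][2])
```

On data generated from an exact linear-log relationship, the linear-log variant is recovered as the best-scoring model with its coefficients intact.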
The outdoor PM 2.5 concentration at steady state (C_out,ss) and indoor PM 2.5 concentration at steady state (C_in,ss) were measured to calculate the PM 2.5 infiltration factor. The deviation of C_out,ss and C_in,ss was within 5% of the measured mean value, indicating that the steady-state assumption was satisfied in the calculation of the infiltration factor. The PM 2.5 infiltration factor ranged from 0.31 to 1.12, with an average of 0.71 (±0.19), under an indoor-outdoor pressure difference of 10 Pa. This suggests that when there is no indoor generating source, the indoor PM 2.5 concentration is about 71% of the outdoor PM 2.5 concentration. The PM 2.5 infiltration factors (0.31 to 1.12) measured in this study were similar to or higher than those found in previous studies [13,16,17], which reported averages of 0.55 to 0.66 for residential buildings. When analyzing the correlation between F_in and the building factors, the measurement results were classified as OPC-1, OPC-2, or OPC-3 to reflect the level and variability of the outdoor PM 2.5 concentration. Three units (Units 2, 9, and 16) were categorized as OPC-2, four units (Units 6, 10, 19, and 22) as OPC-3, and sixteen units (Units 1, 3, 4, 5, 7, 8, 11, 12, 13, 14, 15, 17, 18, 20, 21, and 23) as OPC-1. To avoid the error caused by the outdoor PM 2.5 concentration conditions when calculating the PM 2.5 infiltration factor and to increase the accuracy of the analysis, the OPC-1 measurement results were used to analyze the correlation between the outdoor PM 2.5 infiltration factor and the building factors. Figure 1 shows the distribution of the PM 2.5 infiltration factor measurements for the OPC-1 units. The PM 2.5 infiltration factor averaged 0.68 (±0.15), with a range of 0.36 to 0.91.
To determine whether the PM 2.5 infiltration factor measurements are suitable for analysis of their Pearson correlation with the building characteristics, the distribution of the measurements for the units in the OPC-1 category was plotted: the measurements exhibited a roughly linear relationship with the quantiles of the normal distribution (Figure 1b). Tests of normality, namely the Kolmogorov-Smirnov test and Shapiro-Wilk test, were applied to the measurements, with the results listed in Table 6: both tests yield p-values above the significance level (p > 0.05), so there is not sufficient evidence that the infiltration factor measurements of the OPC-1 group depart from a normal distribution. Accordingly, the measured PM 2.5 infiltration factors for the OPC-1 group were judged suitable for linear correlation analysis. Table 7 lists the correlations between the PM 2.5 infiltration factor and the building characteristics, and Table 8 ranks the dominant building factors in terms of correlation and statistical significance. The correlation coefficients (r) of the airtightness metrics (ACH 50 , NL, and ELA/FA) with the PM 2.5 infiltration factor were 0.701, 0.685, and 0.684, respectively, with p-values of less than 0.01; that is, the correlations were strong, positive, and statistically significant. The outdoor PM 2.5 infiltration is thus proportional to the leakiness of the building, and the relationship between the two can be explained through a linear model. After airtightness, in order of decreasing strength, the building characteristics most highly correlated with the PM 2.5 infiltration factor were WA/FA, volume, floor area, construction year, and EWA/FA. WA/FA, volume, and floor area are related to the size of the building and were found to be negatively correlated with the PM 2.5 infiltration factor, with coefficients of −0.489, −0.366, and −0.362, respectively.
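A normality screening like the one above can be sketched with SciPy; the study used MATLAB's Statistics and Machine Learning Toolbox, so this is an equivalent illustration rather than the authors' code, and parameterizing the Kolmogorov-Smirnov test with the sample's own mean and standard deviation is our simplifying assumption.

```python
import numpy as np
from scipy import stats

def passes_normality(sample, alpha=0.05):
    """Return True when neither the Shapiro-Wilk test nor a
    Kolmogorov-Smirnov test (against a normal with the sample's mean
    and standard deviation) rejects normality at level `alpha`."""
    x = np.asarray(sample, dtype=float)
    _, p_sw = stats.shapiro(x)
    _, p_ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    return bool(p_sw > alpha and p_ks > alpha)

# Perfectly normal quantiles (illustrative, not the study's data) should pass
sample = stats.norm.ppf(np.linspace(0.05, 0.95, 19), loc=0.68, scale=0.15)
```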
Although the PM 2.5 infiltration factor tended to be higher in smaller units, these correlations were not statistically significant (p ≥ 0.05). We thus conclude that the negative correlation between building size and the PM 2.5 infiltration factor is weakly supported and that additional data are needed. The year of construction and EWA/FA had low positive correlations with the PM 2.5 infiltration factor that were likewise not statistically significant. In contrast to EWA/FA and WA/FA, ELA/FA showed a strong, statistically significant positive correlation with the PM 2.5 infiltration factor. This result implies that outdoor PM 2.5 infiltration depends on the leakage area of the building facade, which may vary with the materials or construction of the building, rather than on the size of the facade.
Correlation Between the PM 2.5 Infiltration Factor and Building Factors
The correlation between the building factors was also calculated: the correlations between the year of construction and the airtightness metrics (ELA/FA, ACH 50 , and NL) were 0.604, 0.561, and 0.598, respectively, i.e., a moderate positive correlation that was statistically significant (p < 0.05). This may be attributable to increased airtightness in newly built multifamily homes for the purpose of saving energy. Given this relationship between airtightness and year of construction, the year of construction could serve as a proxy for estimating outdoor PM 2.5 infiltration without field tests, a possibility that can be further investigated as more data are collected.
The airtightness metrics (ACH 50 , NL, EL, and ELA/FA) were selected as the domi- Table 7 lists the correlations between the PM 2.5 infiltration factor and the building characteristics, and Table 8 ranks the dominant building factors in terms of correlation and statistical significance. The correlation coefficients (r) of the airtightness metrics (ACH 50 , NL, and ELA/FA) and the PM 2.5 infiltration factor were 0.701, 0.685, and 0.684, respectively, with p-values of less than 0.01; that is, there was a strong, positive correlation that is statistically significant. The outdoor PM 2.5 infiltration is thus proportional to the airtightness of the building, and the relationship between the two can be explained through a linear model. In addition to airtightness, in the order of decreasing strength, the building characteristics found to be highly correlated with the PM 2.5 infiltration factor are WA/FA, volume, floor area, construction year, and EWA/FA. WA/FA, volume, and floor area are related to the size of the building and were found to be negatively correlated with the PM 2.5 infiltration factor, with coefficients of −0.489, −0.366, and −0.362, respectively. Although the PM 2.5 infiltration factor tended to be higher in smaller units, the correlations were not statistically significant (p-value ≥ 0.05). We thus conclude that the negative correlation between building size and PM 2.5 infiltration factor is less descriptive of their relationship and that additional data are needed. The year of construction and EWA/FA had low positive correlations with the PM 2.5 infiltration factor, and the correlations were not statistically significant. ELA/FA showed a strong, positive correlation with the PM 2.5 infiltration factor within statistical significance rather than EWA/FA and WA/FA. 
This result implies that outdoor PM 2.5 infiltration could depend on the leakage area of the building facade which may differ with the materials or construction of the building, rather than the size of the building facades. The correlation between the building factors was also calculated: the correlations between the year of construction and the airtightness metrics (ELA/FA, ACH 50 , and NL) were 0.604, 0.561, and 0.598, respectively, i.e., a moderate positive correlation that was statistically significant (p-value < 0.05). This may be attributable to increased airtightness in newly built multifamily homes for the purpose of saving energy. Based on the relationship between airtightness and the year of construction, the correlation between the outdoor PM 2.5 infiltration factor and year of construction can be derived without any field tests and can be further investigated through more data collection.
Correlation between the PM 2.5 Infiltration Factor and Building Factors
The airtightness metrics (ACH 50 , NL, EL, and ELA/FA) were selected as the dominant factors based on the ranking of the correlations of the building factors with the PM 2.5 infiltration factor. To avoid the problem of multicollinearity between the independent variables, ACH 50 , which had the highest correlation with the PM 2.5 infiltration factor among the airtightness indicators, was selected as the independent variable for the simple regression model. Table 9 shows the results of four kinds of bivariate linear regression of ACH 50 against the PM 2.5 infiltration factor. The coefficient of determination (R 2 ) of the linear-log regression model was 0.57, indicating that this model has the highest explanatory power for the measured values. The linear-log model captures the diminishing increase in the PM 2.5 infiltration factor at higher ACH 50 values; this may explain why the infiltration factor remains bounded (F in < 1.0) within the range of ACH 50 observed here. Figure 2 graphically illustrates the linear-log regression model of the PM 2.5 infiltration factor according to ACH 50 , utilizing the ACH 50 and PM 2.5 infiltration factor values for case OPC-1.
Table 9. Results of regressing the PM 2.5 infiltration factor (F in ) on ACH 50 .
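The reported linear-log model can be evaluated directly, and the same functional form can be refitted by least squares on ln(ACH 50). The coefficients 0.1999 and 0.3225 are taken from the paper; the fitting helper and any data fed to it are illustrative.

```python
import numpy as np

def f_in_linear_log(ach50, a=0.1999, b=0.3225):
    """Reported linear-log model: F_in = 0.1999 * ln(ACH50) + 0.3225."""
    return a * np.log(ach50) + b

def fit_linear_log(ach50, f_in):
    """Refit the linear-log form by least squares on ln(ACH50);
    returns slope, intercept, and the coefficient of determination R^2."""
    a, b = np.polyfit(np.log(ach50), f_in, 1)
    pred = a * np.log(ach50) + b
    ss_res = np.sum((f_in - pred) ** 2)
    ss_tot = np.sum((f_in - np.mean(f_in)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Predicted infiltration factor for a unit with ACH50 = 10 h^-1
print(round(float(f_in_linear_log(10.0)), 3))  # → 0.783
```

Note the diminishing slope of the log term at high ACH 50, which is what keeps predictions below the physical upper bound F_in < 1.0 within the observed ACH 50 range.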
To assess the outdoor PM 2.5 infiltration of a building according to its airtightness, the airtightness was categorized by leakage class (tight, ACH 50 : 0-5 h −1 ; leaky, ACH 50 : 5-10 h −1 ; sufficiently leaky, ACH 50 : 10-15 h −1 ), as defined in ASHRAE 119. Figure 3 shows the mean and standard deviation of the PM 2.5 infiltration factor according to airtightness level. In the tight units (n = 6), the measured PM 2.5 infiltration factor averaged 0.54 (±0.09), and the value estimated by the regression model was 0.56 (±0.04). In the leaky units (n = 6), the measurements averaged 0.75 (±0.09), and the estimated value was 0.72 (±0.03). In the sufficiently leaky units (n = 4), the measurements averaged 0.81 (±0.08), and the regression model estimate was 0.82 (±0.03). These results indicate that, in the absence of indoor PM 2.5 -generating sources, the PM 2.5 concentration in tight multifamily homes may be about half the outdoor PM 2.5 concentration, whereas sufficiently leaky units may be vulnerable to outdoor PM 2.5 : the indoor PM 2.5 concentration due to outdoor infiltration was up to 1.59 times higher in sufficiently leaky homes than in tight homes, suggesting that indoor exposure risks from outdoor PM 2.5 vary with the airtightness of the multifamily home.
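The leakage-class grouping used above can be written as a small helper. The handling of the exact boundary values (5 and 10 h^-1 assigned to the leakier class) is our assumption; the paper does not state it.

```python
def leakage_class(ach50):
    """Leakage class by ACH50 (h^-1), following the ASHRAE 119-based
    grouping used in the paper: tight 0-5, leaky 5-10,
    sufficiently leaky 10-15. Boundary assignment is assumed."""
    if ach50 < 5.0:
        return "tight"
    if ach50 < 10.0:
        return "leaky"
    return "sufficiently leaky"

print(leakage_class(1.4), leakage_class(7.5), leakage_class(13.0))
# → tight leaky sufficiently leaky
```

Grouping the measured infiltration factors by this function reproduces the tight/leaky/sufficiently leaky means discussed above.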
Analysis of the data that were acquired under the OPC-2 and OPC-3 measurement conditions (seven units) was performed to compare the effects of the outdoor PM 2.5 conditions. The multifamily homes in OPC-2 (C out,avg < 35 µg/m 3 ), with a low concentration of outdoor PM 2.5 , had ACH 50 values of 1.4 h −1 to 13.0 h −1 and PM 2.5 infiltration factors of 0.62 to 0.78. Despite the differences in airtightness, there was no significant difference between the PM 2.5 infiltration factors within the OPC-2 group. This may be due to the low outdoor PM 2.5 concentration and the correspondingly low outdoor-origin indoor PM 2.5 concentration: even a small measurement deviation can then cause relatively large errors when calculating the infiltration factor. The OPC-3 group exhibited a large deviation in the outdoor PM 2.5 concentration (C out,std > 10% of C out,avg ); units in this group had ACH 50 values between 4.9 h −1 and 13.7 h −1 and PM 2.5 infiltration factors between 0.31 and 1.12. We checked how the estimated PM 2.5 infiltration factor differed with changes in the outdoor PM 2.5 concentration. Decreasing concentrations (C out,avg > C out,ss ) tended to be associated with a higher PM 2.5 infiltration factor than predicted by the regression model, while the opposite was true for increasing concentrations (C out,avg < C out,ss ). When the outdoor PM 2.5 concentration changed significantly, a lag time was observed in the accumulation of the outdoor-origin indoor PM 2.5 concentration. Such a lag time when outdoor pollutants infiltrate indoors has been identified through a cross-case analysis in a previous study [39]. Based on these results, further study is needed on methods to compensate for the effect of outdoor PM 2.5 conditions when conducting infiltration experiments with a blower door.
Figure 2. Results of regressing the PM 2.5 infiltration factor on ACH 50 .
Conclusions
The purpose of this study was to evaluate outdoor PM 2.5 infiltration into multifamily homes in Korea according to the building characteristics, utilizing a field test and a regression model. The PM 2.5 infiltration test was conducted using the blower-door depressurization procedure, and correlation analysis was used to identify the dominant building factors associated with the infiltration of outdoor PM 2.5 . A regression model for estimating the PM 2.5 infiltration factor based on the ACH 50 airtightness index was proposed. The key results of this study are as follows:
• The PM 2.5 infiltration analysis was conducted for 23 target units in Korea, and the effective PM 2.5 infiltration factor measured across the 23 homes was 0.71 (±0.19).
• Analysis of the correlation between building characteristics and the PM 2.5 infiltration factor showed that ACH 50 , ELA/FA, and NL had a statistically significant (p < 0.05), strong positive correlation (r = 0.701, 0.685, and 0.684, respectively) with the PM 2.5 infiltration factor.
• Based on the correlation analysis, ACH 50 was selected as the dominant predictor of PM 2.5 infiltration, and a regression model (R 2 = 0.57) was developed to explain the PM 2.5 infiltration factor by the ACH 50 index: F in = 0.1999 · ln(ACH 50 ) + 0.3225.
• The analysis of the PM 2.5 infiltration factor according to leakage class confirmed that the concentration of outdoor-origin PM 2.5 in sufficiently leaky units can be up to 1.59 times higher than that in tight units.
We presented the PM 2.5 infiltration factor for estimating outdoor PM 2.5 infiltration into multifamily homes in Korea and selected ACH 50 as the dominant building factor for predicting the infiltration of outdoor PM 2.5 . These results are potentially useful for indoor exposure assessments and for control measures against outdoor PM 2.5 infiltration based on the airtightness performance of domestic multifamily homes. Although this study targeted Korean multifamily homes, the results could be used to estimate outdoor PM 2.5 infiltration into homes with reinforced concrete structures that have similar characteristics. The results are also expected to be used in calculating dust removal loads to establish system operating strategies aimed at maintaining proper indoor air quality. As the behavior of particles differs according to size fraction [40,41], fine and ultrafine particles could interact differently with building characteristics. Accordingly, the relationship between size-resolved particles and building factors can be investigated in future research building on the results of this study.

The Effects of Increased Midsole Bending Stiffness of Sport Shoes on Muscle-Tendon Unit Shortening and Shortening Velocity: a Randomised Crossover Trial in Recreational Male Runners
Background: Individual compliances of the foot-shoe interface have been suggested to store and release elastic strain energy via ligamentous and tendinous structures or by increased midsole bending stiffness (MBS), compression stiffness, and resilience of running shoes. It is unknown, however, how these compliances interact with each other when the MBS of a running shoe is increased. The purpose of this study was to investigate how structures of the foot-shoe interface are influenced during running by changes to the MBS of sport shoes.
Methods: A randomised crossover trial was performed, where 13 male, recreational runners ran on an instrumented treadmill at 3.5 m·s−1 while motion capture was used to estimate foot arch, plantar muscle-tendon unit (pMTU), and shank muscle-tendon unit (sMTU) behaviour in two conditions: (1) a control shoe and (2) the same shoe with carbon fibre plates inserted to increase the MBS.
Results: Running in a shoe with increased MBS resulted in less deformation of the arch (mean ± SD; stiff, 7.26 ± 1.78°; control, 8.84 ± 2.87°; p ≤ 0.05), reduced pMTU shortening (stiff, 4.39 ± 1.59 mm; control, 6.46 ± 1.42 mm; p ≤ 0.01), and lower shortening velocities of the pMTU (stiff, − 0.21 ± 0.03 m·s−1; control, − 0.30 ± 0.05 m·s−1; p ≤ 0.01) and sMTU (stiff, − 0.35 ± 0.08 m·s−1; control, − 0.45 ± 0.11 m·s−1; p ≤ 0.001) compared to the control condition. The positive and net work performed at the arch and pMTU, and the net work at the sMTU, were significantly lower in the stiff than in the control condition.
Conclusion: The findings of this study showed that if a compliance of the foot-shoe interface is altered during running (e.g. by increasing the MBS of a shoe), the mechanics of other structures change as well. This could potentially affect long-distance running performance.
Key Points
• Individual compliances of the foot-shoe interface have been suggested to store and return elastic strain energy during running via (1) ligamentous and tendinous structures or (2) increased midsole bending stiffness, compression stiffness, and resilience of sport shoes. How these structures interact with each other when one of them is altered, however, is unknown.
• We showed that if one of these structures was altered (e.g. by increasing the midsole bending stiffness of a shoe), the mechanics of the other compliances changed as well.
• Increasing the midsole bending stiffness of a running shoe reduced the deformation of the arch, the shortening of the plantar muscle-tendon unit, and the shortening velocities of the plantar and shank muscle-tendon units. This could potentially have implications for the metabolic cost of running and therefore affect long-distance running performance.
Background
In the stance phase of running, multiple structures (e.g. running shoe, foot arch, tendons, etc.) interact with each other to transmit forces produced by the lower limb muscles through the foot to the ground. Some of these structures have been suggested to store and release elastic strain energy via ligamentous and tendinous elements [1][2][3] or by increased midsole bending stiffness (MBS) [4,5], compression stiffness, or resilience [6] of a running shoe. Some of the largest elastic structures surrounding the foot-shoe interface are the Achilles tendon (AT) and the plantar aponeurosis (PA). It is commonly believed that these elastic structures store energy as they are stretched during running [7][8][9]. This energy is thought to be at least partially returned to the runner and used for propulsion in late stance [10]. The PA and AT energy return estimates were suggested to be approximately 3-17 [1][2][3] and 10-70 J/step [1,9], respectively. In vitro measurements performed by Ker et al. [1] suggested that 17% of the total lower limb mechanical work can be returned by the foot arch in the form of elastic strain energy. More interestingly, it was shown that if the stiffness of the arch was reduced (e.g. by cutting passive elastic structures), its energy return properties decreased [1]. More recent in vivo experiments, however, showed that these values were likely overestimated, and that the arch can return only up to 8% of the total lower limb mechanical work [2]. Furthermore, studies suggested that compared to barefoot running, wearing footwear may limit arch deformation and therefore alter the stiffness of the arch, potentially affecting the spring-like function of the foot [11]. Because this previous work compared barefoot versus shod running only, it remains unclear if these findings are related to wearing footwear, in general, or to systematic differences in specific footwear features such as the MBS.
The MBS of sport shoes has been shown to have large effects on lower limb biomechanics and athletic performance [5,12,13]. It was suggested that the main effects of increased MBS (e.g. by placing a carbon fibre plate along the full length of a shoe) are to (1) minimise energy loss at the metatarsophalangeal (MTP) joint [6,12], (2) store and return elastic strain energy to the foot-shoe interface [4,5], and (3) alter the force-velocity profile of the major ankle plantarflexor muscles [14,15]. These changes would then allow for more economical force generation [14][15][16] and possibly a lower energy cost of locomotion [17,18]. In brief, the principle of minimising energy loss suggests that if less negative work is performed at a joint, muscles crossing the joint perform less eccentric work, which could result in lower energy cost of locomotion because lengthening and shortening incurs a higher energy cost compared to isometric, zero net work contractions [19]. This idea, however, is hypothetical and experimental evidence supporting the principle that minimising energy loss can be used to enhance sport performance is still missing. The principle of storing and returning elastic strain energy via resilient cushioning material of the midsole suggests that the maximum possible energy storage and return can be estimated by modelling the midsole as an idealised compressive spring [20]. Similarly, it was thought that elastic energy can be stored and returned due to longitudinal bending of the midsole, which could be modelled as an idealised torsional spring [4][5][6]. 
Previous literature addressed the principle of storing and returning energy in the midsole due to longitudinal bending, but the results are inconclusive, as some studies suggested that the carbon fibre plates are able to store and return elastic energy indicated by more positive work performed at the MTP joint [4,5] or increased ground reaction forces (GRF) [5], whereas other studies suggested that other footwear features are the primary cause of these observed increases in positive work done at the MTP joint [6]. Lastly, the principle of optimising for muscle function suggests that the variable gearing during running (i.e. the ratio between muscle-tendon unit moment arm and GRF moment arm relative to a joint centre) [21] can be altered by changing the MBS of footwear so that muscle forces are generated at slower speeds [14,22]. This is speculated to reduce the muscle energy cost of generating the necessary forces to execute an athletic task [14,22,23].
The purpose of this study was to investigate how structures of the foot-shoe interface are influenced during running by changes to the MBS of sport shoes. Specifically, the behaviour of a plantar muscle-tendon unit (pMTU), which is a representation of the PA and intrinsic foot muscles [24,25], was studied because it spans across the full length of the foot, and therefore not only crosses the MTP but also the arch of the foot. It was hypothesised that as the MTP joint undergoes extension (i.e. dorsiflexion), and therefore negative work is performed at the joint, the pMTU will perform positive work at the arch due to the windlass mechanism [26]. Because of this mechanism, it was expected that the positive work at the arch will be reduced when running with increased MBS, as the extension of the MTP will be limited [5,6,12]. Furthermore, a secondary purpose of this study was to investigate the energetic behaviour of a shank muscle-tendon unit (sMTU), which is a representation of the triceps surae muscle and the AT. The behaviour of the sMTU was studied because it is a major positive work generator of the lower limb during running [27,28], and thus corresponds to a large portion of the total metabolic cost of running [9]. It was hypothesised that running in shoes with increased MBS would result in reduced shortening velocities of the sMTU [15]. It needs to be noted that the MTU models developed in this study represent approximations of biarticulated MTUs. As such, it is possible that the extrinsic foot muscles and the knee joint orientation may affect the pMTU and sMTU mechanics, respectively. This study, however, assumed the pMTU to represent a functional unit consisting of intrinsic foot muscles and the PA, and addressed the sMTU mechanics at its distal end, only.
Experimental Set-up and Data Collection
Participants
The detailed protocol has been described previously [5]. In brief, 13 male, recreational runners (mean ± SD; height, 162.8 ± 0.5 cm; body mass, 70.5 ± 8.3 kg) performed running trials in 2 shoe conditions. All participants were moderately active, free of neuromuscular disorders and lower limb injuries in the past 6 months before participation, and fit a US men's size 9 shoe. Also, all participants gave written informed consent prior to participating in this study.
Footwear Conditions
The control condition consisted of a commercially available running shoe (Nike Free 5.0, Nike Inc., Beaverton, USA), and the stiff condition was achieved by inserting straight carbon fibre plates in between the midsole and the factory insole of the control shoe. The MBS of the entire shoe was determined using a 3-point bending test [5]. Values of 1.2 N/mm and 11.9 N/mm were obtained for the control and stiff condition, respectively. The masses of the shoe conditions were determined using a laboratory balance (Model PG4002-S, Mettler-Toledo, Columbus, USA) and were 225.67 g and 289.10 g for the control and stiff condition, respectively.
Biomechanical Testing
For in vivo biomechanical testing, participants ran on an instrumented treadmill (Bertec Corporation, Columbus, USA) at 3.5 m·s −1 in both shoe conditions. The order of conditions was randomised across participants. After participants performed familiarisation trials of 10-15 s to get accustomed to the running speed and footwear conditions, the speed of the belt was increased to 3.5 m·s −1 and data were collected for 30 s approximately 2 s after speed was attained. This familiarisation period was deemed sufficient because all participants were experienced in treadmill running [29]. Furthermore, the fact that the footwear conditions were randomised between participants should have reduced potential confounding effects of treadmill or footwear habituation. The stance phases from 30 steps were identified and used for further analyses. Three-dimensional (3D) kinematic and kinetic data were measured using eight high-speed cameras (Motion Analysis, Santa Rosa, USA) and a single force plate instrumented in the treadmill. Twenty-five retroreflective markers were mounted on the following anatomical landmarks: distal phalanx of the great toe (GT), third toe, and fifth toe; distal heads of the first (MP1) and fifth metatarsals; navicular tuberosity (NT); medial (MH), lateral, and proximal heel; medial and lateral malleolus; proximal, distal, and posterior shank; medial and lateral epicondyles; proximal, distal, and posterior thigh; left and right greater trochanter; right and left anterior superior iliac spine; and right and left posterior superior iliac spine. For this manuscript, however, only the first 16 markers mentioned above were used for analysis. Holes were cut in the shoe to allow for the application of markers on the skin overlying the distal head of the first metatarsal, navicular tuberosity, and medial heel, as participants did not wear socks during the running trials [4]. Motion data and GRFs were recorded at 240 and 1000 Hz, respectively.
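Identifying the 30 stance phases from the instrumented treadmill's vertical GRF can be sketched as below. The 20 N force threshold is an assumed value for illustration; the paper does not report its detection criterion.

```python
import numpy as np

def stance_phases(fz, threshold=20.0):
    """Return (start, end) sample indices of stance phases, detected
    where the vertical GRF exceeds a force threshold (20 N assumed)."""
    on_ground = (np.asarray(fz) > threshold).astype(int)
    edges = np.diff(on_ground)
    starts = np.where(edges == 1)[0] + 1   # foot-strike samples
    ends = np.where(edges == -1)[0] + 1    # toe-off samples
    if ends.size and starts.size and ends[0] <= starts[0]:
        ends = ends[1:]                    # drop incomplete first stance
    n = min(starts.size, ends.size)
    return [(int(s), int(e)) for s, e in zip(starts[:n], ends[:n])]

# Two synthetic foot contacts sampled at 1000 Hz
fz = np.zeros(1200)
fz[100:350] = 800.0     # first stance: samples 100-349
fz[600:850] = 800.0     # second stance: samples 600-849
print(stance_phases(fz))  # → [(100, 350), (600, 850)]
```

A small positive threshold (rather than zero) avoids spurious stance detections from force-plate noise between steps.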
Dynamometry and Ultrasound Testing
A dynamometry and ultrasound session [30] was performed immediately before the biomechanical testing to estimate the moment arm of the sMTU (MA sMTU ; i.e. the Achilles tendon moment arm). For this, participants were seated in a Biodex System 3 dynamometer (Biodex Medical, Shirley, USA) and the ankle joint axis was aligned with the dynamometer axis. The foot was placed on the dynamometer foot plate and tightly secured using straps. The ankle and knee joints were oriented to 0° (neutral) and 60° (flexion), respectively. Straps were used to limit the participant's hip and thigh motion. A 50-mm linear-array probe was placed over the myotendinous junction of the AT, which captured its trajectory at a sampling frequency of 78 Hz on a Logiq E9 ultrasound system (gain 50 dB, depth 3.0 cm, frequency 13 MHz; GE Healthcare, Chicago, USA). The AT moment arm was estimated using the tendon excursion method [31]. For the tendon excursion method to be valid, it is important that the tendon elongation is measured where no passive moment is present [32]. Thus, the foot was rotated from 0 to 20° plantarflexion, the range over which no appreciable passive moment (< 1 Nm) was present. For this, the myotendinous junction elongation was tracked manually from the ultrasound images over the entire range of motion using ImageJ (NIH, Bethesda, USA). It should be noted that absolute values obtained using the tendon excursion method can be erroneous [33]. In a within-subject design, however, it is assumed that this error would not differ between footwear conditions and therefore would not affect the conclusions drawn from the findings of the study.
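The tendon excursion method estimates the moment arm as the slope of tendon excursion with respect to joint angle (in radians). A minimal sketch with synthetic data; the 50 mm moment arm and noiseless excursion are hypothetical, not measured values.

```python
import numpy as np

def moment_arm_tendon_excursion(angles_deg, excursion_mm):
    """Tendon excursion method: the moment arm is the slope of tendon
    excursion versus joint angle, with the angle in radians."""
    slope, _intercept = np.polyfit(np.deg2rad(angles_deg), excursion_mm, 1)
    return slope  # mm per radian, i.e. the moment arm in mm

# Synthetic example: a hypothetical 50 mm Achilles tendon moment arm,
# sampled over the 0-20 deg plantarflexion range used in the protocol
angles = np.linspace(0.0, 20.0, 11)
excursion = 50.0 * np.deg2rad(angles)
print(round(moment_arm_tendon_excursion(angles, excursion), 3))  # → 50.0
```

Fitting a line (rather than differencing adjacent frames) averages out frame-to-frame tracking noise in the manually digitised myotendinous junction positions.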
Data Processing and Analysis
MTP and Ankle Joint Kinematics and Kinetics
Raw kinematic and kinetic data were analysed using a custom written MATLAB code (Version 2019b; the MathWorks Inc., Natick, USA). Force data were downsampled to 240 Hz by performing a shape-preserving, piecewise cubic interpolation. To determine 3D MTP and ankle joint kinematics and kinetics, marker and force data were filtered using a dual pass 2nd order (i.e. zero-lag fourth order) Butterworth filter with a cut-off frequency of 50 Hz. A Newton-Euler approach was used to describe joint motion (sequence: flexion-extension, abduction-adduction, internal-external rotation), and an inverse dynamics approach was used to calculate sagittal internal joint moments to represent the moment primarily attributed to muscle forces. The MTP joint centre was estimated halfway along the MTP joint axis, which was defined by a line connecting the distal heads of the first and fifth metatarsal. The MTP joint moment was set to zero when the centre of pressure (COP) was proximal to the MTP joint axis [6,27,30]. The moment arm of the GRF to the MTP joint centre was determined as the perpendicular distance of the COP relative to the MTP joint axis [34]. Joint powers were calculated as the product of internal joint moment and angular velocity. Positive and negative joint work were determined as the integral of the positive and negative joint power-time curves over the stance phase, respectively.
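The zero-lag filtering and the work computations described above can be sketched as follows: a dual-pass 2nd-order Butterworth low-pass filter via `scipy.signal.filtfilt`, and positive/negative joint work as rectangle-rule time integrals of the clipped power curve (the rectangle rule is our simplification of "the integral").

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dual_pass_butter(x, cutoff_hz, fs_hz, order=2):
    """Dual-pass (zero-lag) low-pass Butterworth filter: order 2 per
    pass, i.e. 4th order effective, as used for marker and force data."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, x)

def joint_work(power_w, fs_hz):
    """Positive and negative joint work (J) as time integrals of the
    positive and negative parts of the joint power curve."""
    dt = 1.0 / fs_hz
    positive = np.sum(np.clip(power_w, 0.0, None)) * dt
    negative = np.sum(np.clip(power_w, None, 0.0)) * dt
    return positive, negative

# Example: one sinusoidal power cycle (peak 100 W) sampled at 1000 Hz;
# analytically, each half integrates to ±100/pi ≈ ±31.83 J
t = np.linspace(0.0, 1.0, 1001)
power = 100.0 * np.sin(2.0 * np.pi * t)
w_pos, w_neg = joint_work(power, 1000.0)
print(round(w_pos, 2), round(w_neg, 2))
```

The dual pass cancels the phase lag of a causal filter, which matters when power is formed as the product of a moment and an angular velocity derived from separately filtered signals.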
Ground Reaction Force Partitioning for Multiple Foot Segments
The GRFs measured by a force plate act on a single point, the COP. During the stance phase of running, however, multiple foot segments are in contact with the ground. The different forces that act on individual foot segments cannot be determined with a single force plate. To partially overcome this limitation, a weighted probabilistic approach was used to partition the GRF for individual foot segments [24]. In brief, the magnitude of force that was partitioned for the rearfoot and midfoot depended on the vertical trajectory of each segment's centre of mass, based on a 3D marker data relative to a global coordinate system, and its antero-posterior distance to the COP. Once the COP progressed distally to the MTP joint axis, the force acting on the rearfoot was set to zero because at these instants the rearfoot was assumed to not be in contact with the ground anymore.
Arch Mechanics and Muscle-Tendon Unit Models
Marker trajectories that were used to describe arch and pMTU kinematics, namely MH, NT, MP1, and GT, were filtered with a cut-off frequency of 20 Hz. The mechanics of the arch were estimated during the stance phase of running by performing a sagittal plane analysis using the partitioned GRF and 3D trajectories of the MH, NT, and MP1 markers. The arch angle (AA) was determined as the 3D angle between two vectors, namely between MH and NT, and between MP1 and NT (Fig. 1). Therefore, the NT was set as the centre of the arch joint. The angular velocity of the arch was determined by the first time derivative of the angular deformation. The rear- and midfoot centres of mass were estimated halfway between the MH and NT, and the MP1 and NT markers, respectively. The arch joint moment was calculated using an inverse dynamics approach, where the forces generating the moment were assumed to be the partitioned GRFs, gravity, and joint reaction forces [24].
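The 3D arch angle reduces to the angle between two marker-difference vectors at the navicular tuberosity. A minimal sketch; the marker coordinates in the example are hypothetical, not measured positions.

```python
import numpy as np

def arch_angle(mh, nt, mp1):
    """Arch angle (deg): the 3D angle at the navicular tuberosity (NT)
    between the NT->MH and NT->MP1 vectors."""
    v1 = np.asarray(mh, float) - np.asarray(nt, float)
    v2 = np.asarray(mp1, float) - np.asarray(nt, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against floating-point values just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical marker positions (m): heel, navicular, metatarsal head
print(round(arch_angle([-0.10, 0.0, 0.0], [0.0, 0.0, 0.03],
                       [0.12, 0.0, 0.0]), 1))
```

Arch flexion (a flatter arch) corresponds to this angle increasing during midstance, and the arch angular velocity follows by numerical differentiation of the angle time series.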
The sMTU and its force (F sMTU ) were estimated based on a previously described musculoskeletal model [30,35]. For this, the sagittal plane ankle joint moment was divided by the MA sMTU , which was corrected for the ankle angle [38]. sMTU stretch/shortening velocity (v sMTU ) was approximated by multiplying the ankle joint angular velocity with the MA sMTU [21] (Fig. 2). This estimated the linear velocity acting on the proximal end of the foot segment, where the sMTU was assumed to be attached. This linear velocity, however, acts perpendicular to the foot segment, which is not necessarily the orientation of the sMTU, as the sMTU was assumed to be parallel with the shank segment. For this reason, a correction was performed that accounted for the orientation of the sMTU relative to the velocity at the proximal end of the foot: v sMTU = ω ankle · MA sMTU · cos(θ foot/shank ), where v sMTU is the stretch/shortening velocity of the sMTU, ω ankle is the ankle joint angular velocity, and θ foot/shank is the angular difference between the linear velocity at the proximal end of the foot segment and the shank. The force and stretch/shortening velocity of the sMTU are therefore approximations of the mechanics at the distal end of the sMTU (i.e. the AT). The pMTU was based on a geometrical model using kinematic data of retroreflective markers placed on the MH, MP1, and GT [24,25]. For this, MH and GT defined the origin and insertion of the pMTU, respectively. The length of the pMTU was estimated as the sum of the distances between MH and MP1, and between GT and MP1. This allowed the estimation of pMTU length changes during the stance phase of running due to relative foot segment motion. MP1 acted as a tether point around which the pMTU was wrapped. If the MTP joint was extended, the tether point rotated in conjunction with the origin of the pMTU. This increased the length of the distal part of the pMTU, between MP1 and GT.
Therefore, the length of the pMTU was corrected by estimating the arc length (Arc pMTU ) due to MTP joint extension as Arc pMTU = r MP1 · θ MTP , where Arc pMTU is the wrapping length of the pMTU around the tether point on MP1, r MP1 is the estimated radius of the distal head of the first metatarsal (i.e. 9.2 mm) [39], and θ MTP is the angular deformation of the MTP joint (in radians).
pMTU stretch/shortening velocity (v pMTU ) was determined by the first time derivative of the pMTU length changes. The pMTU force (F pMTU ) was estimated as a function of the vertical ground reaction force (F vGRF ) and the estimated AT force (F sMTU ), based on cadaveric work from Cheung et al. [36]. Arch and MTU power were determined as the product of arch angular velocity and joint moment, and of MTU stretch/shortening velocity and force, respectively. Joint and MTU positive and negative work were determined by the time integral of the positive and negative power curves, respectively. It should be noted that contributions from the individual compartments of an MTU (i.e. the muscle or the tendon) cannot be distinguished from each other using the methods described above.
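A sketch of the pMTU length estimate from the three markers. We assume the wrapping arc Arc_pMTU = r_MP1 · θ_MTP is simply added to the summed segment distances during MTP extension; the marker positions in the example are hypothetical and in millimetres.

```python
import numpy as np

R_MP1 = 9.2  # mm, assumed radius of the first metatarsal head [39]

def pmtu_length(mh, mp1, gt, mtp_extension_rad):
    """pMTU length: MH->MP1 plus MP1->GT distances, plus the wrapping
    arc around the metatarsal head when the MTP joint is extended
    (adding the arc is our modelling assumption)."""
    d_prox = np.linalg.norm(np.asarray(mp1, float) - np.asarray(mh, float))
    d_dist = np.linalg.norm(np.asarray(gt, float) - np.asarray(mp1, float))
    arc = R_MP1 * max(mtp_extension_rad, 0.0)   # no wrapping in flexion
    return d_prox + d_dist + arc

# Hypothetical flat-foot marker positions, then 0.3 rad MTP extension
mh, mp1, gt = [0.0, 0.0, 0.0], [150.0, 0.0, 0.0], [190.0, 0.0, 0.0]
print(round(pmtu_length(mh, mp1, gt, 0.0), 2))   # → 190.0
print(round(pmtu_length(mh, mp1, gt, 0.3), 2))   # → 192.76
```

Differentiating this length over the stance phase yields v_pMTU, with shortening appearing as negative velocity, as reported in the Results.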
Statistics
Shapiro-Wilk tests were performed to test for normality of the variables of interest. These variables included (1) positive, negative, and net work; peak change in angle; take-off angle; peak moment; and peak flexion velocity during the second half of stance for the arch; (2) positive, negative, and net work; peak length change; take-off length; peak force; and peak shortening velocity during the second half of stance for the pMTU; and (3) positive, negative, and net work; peak shortening velocity; and peak force for the sMTU. If a Shapiro-Wilk test revealed a normal distribution, a paired t test was performed to test for significant differences between stiffness conditions; otherwise, the Wilcoxon signed-rank test was used. The significance level α was set to 0.05, and the Benjamini-Hochberg method was used to correct for multiple comparisons by adjusting individual p values [40]. Effect size estimates were calculated using Cohen's d to aid in the interpretation of significant findings.

Fig. 1 Schematic of the sagittal plane model used to estimate the arch angle (AA) using markers placed on the medial heel (MH), navicular tuberosity (NT), and distal head of the first metatarsal (MP1). The shank muscle-tendon unit (sMTU) was estimated along the orientation of the shank. sMTU force (F sMTU ) was calculated based on a musculoskeletal model [35] and in vivo ultrasound imaging of the sMTU moment arm (MA sMTU ). The plantar muscle-tendon unit (pMTU) was estimated spanning from MH to the great toe (GT). pMTU force (F pMTU ) was approximated using vertical ground reaction forces and F sMTU [36]. Modified from [37]
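The normality-gated paired testing, Benjamini-Hochberg adjustment, and paired Cohen's d can be sketched as below. Applying the Shapiro-Wilk test to the paired differences is our reading of the procedure; the helpers use only scipy/numpy.

```python
import numpy as np
from scipy import stats

def paired_test(a, b, alpha=0.05):
    """Paired t test if the paired differences pass Shapiro-Wilk
    normality at alpha, otherwise the Wilcoxon signed-rank test."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if stats.shapiro(a - b).pvalue > alpha:
        return stats.ttest_rel(a, b).pvalue
    return stats.wilcoxon(a, b).pvalue

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    adj = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

def cohens_d_paired(a, b):
    """Cohen's d for paired samples: mean difference / SD of differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / d.std(ddof=1)

print(benjamini_hochberg([0.01, 0.02, 0.03, 0.04]))  # each adjusts to 0.04
```

In use, every comparison across the arch, pMTU, and sMTU outcome variables would be collected into one p-value vector before the Benjamini-Hochberg adjustment.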
Discussion
The purpose of this study was to investigate how structures of the foot-shoe interface are influenced during running by changes to the MBS of sport shoes. It was hypothesised that running with increased MBS would result in less positive work performed at the arch, as the windlass mechanism is limited by reduced MTP joint extension. The findings of this study supported the first hypothesis, as the positive work performed at the arch was lower in the stiffer shoe condition. Lower positive work at the arch when running with increased MBS occurred due to reduced arch flexion velocities and pMTU shortening velocities. No differences in arch flexion moments or pMTU forces were observed between stiffness conditions. Furthermore, less arch extension and flexion, and less pMTU stretching and shortening, were observed in the stiff compared to the control condition. Also, at take-off, both structures were in a more extended and lengthened position when running in the stiff condition. Running with increased MBS altered the spring-like function of the foot by reducing the deformation of the arch and the stretching and shortening of the pMTU. Maintaining the same athletic movement (i.e. treadmill running) while reducing the mechanical work contributions of MTUs by increasing the MBS of sport shoes could be indicative of more efficient locomotion.

Fig. 2 The shank muscle-tendon unit velocity (v sMTU ) was estimated using the ankle angular velocity (ω ankle ), the linear velocity at the proximal end of the foot (v linear ), the shank muscle-tendon unit moment arm to the ankle joint centre (MA sMTU ), and the angle between v linear and the shank segment (θ foot/shank ). Modified from [37]
Previous studies have proposed that running with increased MBS resulted in better athletic performance because it allowed the major ankle plantarflexor muscles (e.g. triceps surae) to generate force more economically [15,22]. For this reason, it was hypothesised that running in the stiff condition would result in lower shortening velocities of the sMTU. The results of this study showed significantly lower shortening velocities of the sMTU in the stiff compared to the control condition. There were no differences in sMTU forces between stiffness conditions. The net work performed by the sMTU, however, was lower in the stiff condition. Reduced shortening velocities of the entire sMTU can originate from slower shortening of the muscle (i.e. triceps surae) or the tendon (i.e. AT) in series. In the first case, reduced shortening velocities of the muscle could be indicative of reduced rates of force generation. Changes in rates of force generation have been shown to be related to changes in metabolic cost of running (i.e. cost of force generation hypothesis) [41,42]. The cost of force generation hypothesis suggests that the metabolic cost of running is inversely proportional to contact time [43]. In support of this hypothesis, the contact times in this data set were significantly increased by ~13 ms per step when running in the stiff compared to the control condition, as described previously [5], which represents a 4.76% decrease in rate of force generation.

Fig. 3 Group mean ± standard deviation (shaded area) of the metatarsophalangeal joint (MTP; first row), plantar muscle-tendon unit (pMTU; second row), arch (third row), and shank muscle-tendon unit (sMTU; fourth row) angle/length change (first column), (angular) velocity (second column), force/moment (third column), and power (fourth column) across the stance phase of running in the control (blue full line) and stiff (red broken line) conditions.
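The arithmetic linking the ~13 ms longer contact time to the stated 4.76% decrease can be checked under the assumption that the rate of force generation scales inversely with contact time. The 260 ms baseline contact time below is an illustrative assumption (it is not reported in this excerpt), chosen because it reproduces the reported percentage.

```python
def rate_of_force_generation_change(tc_control_ms, delta_ms):
    """Percentage change in the rate of force generation, assuming the rate
    scales inversely with ground contact time (cost of force generation
    hypothesis)."""
    tc_stiff_ms = tc_control_ms + delta_ms
    rate_control = 1.0 / tc_control_ms
    rate_stiff = 1.0 / tc_stiff_ms
    return (rate_stiff - rate_control) / rate_control * 100.0

# Assumed baseline contact time of 260 ms (not reported in this excerpt):
change = rate_of_force_generation_change(260.0, 13.0)  # about -4.76%
```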
This reduced rate of force generation should reduce the metabolic cost of running because the triceps surae muscles can generate the same force at a slower velocity, thus reducing the level of motor unit recruitment [23]. It is plausible to speculate that a ~5% reduction in the triceps surae rate of force generation could contribute to subtle differences in metabolic cost of running between stiffness conditions. In the latter case, where differences in sMTU shortening velocity originated from slower tendon shortening, this could have implications for the energy return properties of the tendon. Slower tendon shortening would result in reduced positive power and therefore less returned energy by the tendon. Many studies have speculated that energy storage and return of tendinous structures are beneficial for running [7,44]. Therefore, if running with increased MBS reduced tendon-shortening velocities, and therefore released energy, then this could be considered disadvantageous for running performance. This study, however, cannot address whether the changes in sMTU shortening velocity were due to slower shortening of the muscle or the tendon. It can only address the fact that running with increased MBS has an effect on the shortening velocities of MTUs of the foot-shoe interface. Therefore, future studies should try to answer this question by using in vivo ultrasound imaging of the muscle fascicles of the sMTU (i.e. gastrocnemius medialis, gastrocnemius lateralis, or soleus muscle) or the myotendinous junction of the AT [15].
The results of this study showed that the pMTU length at take-off was shorter than at heel-strike. This is probably due to the windlass mechanism. It is hypothesised that the PA pulled the calcaneus closer to the distal metatarsal heads as the MTP joint underwent extension, and therefore, the pMTU was shortened [26,45]. In the stiff condition, however, the peak MTP joint extension was lower, which likely limited the wrapping of the pMTU around the distal metatarsal heads, and therefore, the length at take-off was closer to its initial length at heel-strike compared to the control condition.
Although the MTP joint showed increased flexion velocities in the stiff condition, the pMTU, which crosses the MTP joint and therefore contributes to MTP joint mechanics, showed slower shortening velocities. This means that the increased MTP joint flexion velocities must be caused by some other mechanism than the behaviour of the pMTU. It is possible that the carbon fibre plates that were inserted to increase the MBS of the running shoe are related to the increases in MTP joint flexion velocities. Elastic strain energy that is stored in the carbon fibre plates as the MTP joint undergoes extension could be returned during late stance, increasing joint flexion velocities.
Kelly et al. [11] proposed that the foot-shoe interface can be modelled as two springs that act in series, where the viscoelastic midsole of a shoe and the foot arch behave like compressive springs with given stiffnesses. This model is based on the assumption that the neuromuscular system aims to maintain a constant system stiffness during locomotion [46,47]. The findings of this study showed that if the MBS of a running shoe, which is thought to behave as a torsional spring [4][5][6], is increased, the deformation of linear/rotational compliances (e.g. arch, pMTU) surrounding the foot-shoe interface is reduced. The forces and moments acting on these structures, however, did not differ between footwear conditions. Therefore, if the mechanical load on these structures remained the same but the deformation was reduced, it can be concluded that the individual apparent stiffness increased. This further supports previous findings by Kelly et al. [11] that the foot-shoe interface can be modelled as multiple compliances that act in series:

1/k_foot/shoe = 1/k_shoe + 1/k_arch + 1/k_pMTU

where k_foot/shoe is the system stiffness of the foot-shoe interface, k_shoe is the MBS of a shoe, k_arch is the stiffness of the arch, and k_pMTU is the stiffness of the pMTU. It needs to be noted that cushioning stiffness and MBS of a shoe are two distinctive shoe properties that should not be used interchangeably; however, the findings of Kelly et al. [11] and the findings of this study suggest that by increasing either of these stiffnesses, similar increases in apparent arch stiffness can be observed.
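The series-compliance model described above (compliances, 1/k, add for elements in series) can be sketched numerically. The stiffness values below are hypothetical, chosen only to illustrate that raising the shoe term raises the system stiffness while the arch and pMTU terms are unchanged.

```python
def series_stiffness(*stiffnesses):
    """Effective stiffness of elastic elements in series: compliances (1/k) add."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)

# Hypothetical stiffnesses (arbitrary units) for shoe, arch, and pMTU:
k_control = series_stiffness(100.0, 80.0, 120.0)
k_stiff = series_stiffness(200.0, 80.0, 120.0)  # only the shoe term is raised
```

Note that in a series arrangement the softest element dominates, so doubling the shoe stiffness increases the system stiffness by less than a factor of two.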
Limitations
There are some limitations associated with this study. The MTU models developed in this study represent approximations of biarticulated MTUs. For the sMTU model, the main purpose was to estimate its mechanics at its distal end because the ankle joint was suggested to be the main positive work generator of the lower limb during running [27]. It is possible, however, that the orientation of the joint at the proximal end of the sMTU (i.e. knee joint) could have influenced its mechanics, which was not accounted for in the model used in this study. Furthermore, the pMTU model represented a functional unit consisting of intrinsic foot muscles and the PA. It is likely, however, that the pMTU also included contribution from extrinsic foot muscles (e.g. tibialis posterior). The methods used to estimate MTP/ankle [5,6,27] and arch [24] kinetics differed from each other. Three light-reflective markers were placed on the distal and proximal segment, respectively, of the MTP and ankle joint to measure segmental angular acceleration. For the arch, placing three markers on the distal (metatarsals) and proximal (calcaneus) segment, respectively, would have required cutting additional holes in the shoe. This, however, could have compromised the structural integrity of the shoe. Because different methods were used to estimate MTP/ankle and arch joint kinetics, the interpretation of joint moments, powers, and work should focus on differences between footwear conditions instead of the differences between joints.
Similarly to Riddick et al. [24], the pMTU model used in this study is an estimate of unified PA and intrinsic foot muscle behaviour. Parsing out the contributions of individual structures of the foot cannot be done using this method. Electromyographic analyses of intrinsic foot muscle function have suggested that different muscles can show different activation patterns during the stance phase of walking and running [25]. Therefore, it is possible that some intrinsic foot muscles have shortened more than others. Although individual length changes and specific functions of various intrinsic foot muscles and the PA cannot be distinguished between in this study, it can be assumed that these structures of the foot act as a functional unit in response to the GRFs and arch deformation experienced during running [25].
We estimated the pMTU force as a function of AT force (i.e. sMTU force) and vertical GRF [36]. Other methods have been proposed to approximate the PA [48] or pMTU [24] forces in previous literature. Therefore, we also used the methods of Erdemir et al. [48] and Riddick et al. [24] to compute and compare pMTU forces. In brief, Erdemir et al. determined the PA force by a linear relationship between AT force and vertical GRF, similar to this study. Riddick et al., however, estimated the pMTU force by dividing the arch moment by the distance between the arch joint and the insertion of the pMTU on the calcaneus. When these methods were used to approximate pMTU forces for our data, peak pMTU forces were 35.20 ± 8.55 N·kg −1 and 127.04 ± 37.95 N·kg −1 for the control, and 34.46 ± 9.34 N·kg −1 and 118.61 ± 26.20 N·kg −1 for the stiff condition. There was no significant difference in peak pMTU force between footwear conditions using the methods of Erdemir et al. It needs to be noted, however, that absolute pMTU forces varied tremendously between methods. In general, estimated muscle or MTU forces are overestimated when using a musculoskeletal modelling approach [49]. In addition, the estimation of pMTU force by the methods of Riddick et al. [24] is strongly dependent on the location and trajectory of the arch joint centre. Therefore, it needs to be acknowledged that absolute values of pMTU forces are probably not correct; however, no matter what approach was chosen to estimate pMTU forces, the conclusions drawn in this study are regarding a change in pMTU forces as a function of footwear condition and thus remain the same.
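The Riddick et al.-style estimate described above (arch moment divided by the pMTU moment arm) can be sketched as follows. The moment and moment-arm values are hypothetical, chosen only to show how a plausible ~2 cm moment arm yields peak forces of the same order as the ~120-127 N·kg −1 reported for that method.

```python
def pmtu_force_from_arch_moment(arch_moment, moment_arm):
    """pMTU force as arch moment divided by its moment arm (the Riddick
    et al.-style estimate described in the text). Units follow the inputs:
    N*m/kg divided by m gives N/kg."""
    return arch_moment / moment_arm

# Hypothetical inputs: 2.5 N*m/kg arch moment, 0.02 m moment arm.
peak_force = pmtu_force_from_arch_moment(2.5, 0.02)  # 125.0 N/kg
```

The sensitivity to the assumed arch joint centre is visible here: a few millimetres of moment-arm error changes the force estimate by tens of N·kg −1.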
The stretch/shortening velocities of the pMTU and sMTU were based on time differentiation of estimated length changes and angular velocity transformations, respectively. None of the MTU velocities were based on measured length changes. Therefore, it cannot be guaranteed that the reported values are true representations of the MTU behaviour on a tissue level. Accordingly, the results reported in this study should be interpreted in the context of these limitations and the focus should be on comparisons of estimated values between footwear conditions. Future studies should try to address these limitations by using in vivo ultrasound imaging to better approximate the true stretch/shortening velocities of MTUs surrounding the foot-shoe interface [15].
Conclusions
In conclusion, running in shoes with increased MBS resulted in less deformation of the arch and pMTU, and in slower shortening velocities of the pMTU and sMTU during late stance. Slower shortening velocities of MTUs led to reduced positive work performed by the compliances surrounding the foot-shoe interface. Based on the cost of generating force hypothesis [41,42], it can be speculated that slower shortening velocities due to increased stance times could be related to lower metabolic rates of running if the reduced MTU shortening velocities are attributed to the muscle. If the slower shortening velocities are attributed to the tendon, however, it could be indicative of reduced energy return capacities of the tendon. Future studies should determine if the observed changes in shortening velocities are due to changes in muscle or tendon behaviour to further elucidate the effects of MBS on the energetics of running.
"year": 2020,
"sha1": "2e3ada9aa2f692ca1483531b6cfc9c8e04a7a4d1",
"oa_license": "CCBY",
"oa_url": "https://sportsmedicine-open.springeropen.com/track/pdf/10.1186/s40798-020-0241-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc678945fad6bd24138600eb39b7f8103102122e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
Association between burnout and adherence with mask usage and additional COVID-19 prevention behaviours: findings from a large-scale, demographically representative survey of US adults
Objectives Studies have found associations between occupational burnout symptoms and reduced engagement with healthy behaviours. We sought to characterise demographic, employment and sleep characteristics associated with occupational burnout symptoms, and to evaluate their relationships with adherence to COVID-19 prevention behaviours (mask usage, hand hygiene, avoiding gatherings, physical distancing, obtaining COVID-19 tests if potentially infected). Methods During December 2020, surveys were administered cross-sectionally to 5208 US adults (response rate=65.8%). Quota sampling and survey weighting were employed to improve sample representativeness of sex, age and race and ethnicity. Among 3026 employed respondents, logistic regression models examined associations between burnout symptoms and demographic, employment and sleep characteristics. Similar models were conducted to estimate associations between burnout and non-adherence with COVID-19 prevention behaviours. Results Women, younger adults, unpaid caregivers, those working more on-site versus remotely and those with insufficient or impaired sleep had higher odds of occupational burnout symptoms. Burnout symptoms were associated with less frequent mask usage (adjusted odds ratio (aOR)=1.7, 95% CI 1.3–2.1), hand hygiene (aOR=2.1, 95% CI 1.7–2.7), physical distancing (aOR=1.3, 95% CI 1.1–1.6), avoiding gatherings (aOR=1.4, 95% CI 1.1–1.7) and obtaining COVID-19 tests (aOR=1.4, 95% CI 1.1–1.8). Conclusions Disparities in occupational burnout symptoms exist by gender, age, caregiving, employment and sleep health. Employees experiencing occupational burnout symptoms might exhibit reduced adherence with COVID-19 prevention behaviours. Employers can support employee health by addressing the psychological syndrome of occupational burnout.
INTRODUCTION
Occupational burnout, a psychological syndrome resulting from chronic work-related stress, 1 2 is experienced across occupations. 3 Initially described by Greene in his 1961 novel A Burnt-Out Case 4 and later operationalised by Maslach,[5][6][7] burnout is framed as a psychological syndrome characterised by emotional exhaustion, depersonalisation and a reduced sense of professional efficacy. Together, these dimensions of burnout, which are a product of the work activity rather than individual characteristics, cause maladaptive cognitive, emotional and attitudinal states, which are compounded by projection of negative behaviours exhibited towards work and peers. 5 Empirical data indicate that exhaustion and depersonalisation represent core dimensions of occupational burnout, while perceived lack of professional fulfilment or efficacy precedes or follows burnout. 8
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ The recruitment of more than 5000 respondents and application of both demographic quota sampling and postsample survey weighting of responses supported the assembly of a large-scale, demographically representative sample from which a subset of employed adults was selected to form the analytical sample.
⇒ The inclusion of an expansive set of demographic, sleep and COVID-19-related variables enabled comprehensive characterisation of the survey sample.
⇒ The cross-sectional study design limits the ability to infer causality.
⇒ Self-report data may be subject to recall, response and social desirability biases.
Open access
Burnout is increasingly recognised as an occupational hazard, with workplace factors (eg, workload, job autonomy, perceived support from leadership) exhibiting robust associations with burnout symptoms, and psychosocial factors related to well-being (eg, mental health, sleep, social support) exhibiting bidirectional relationships with burnout. 9 Employees experiencing burnout symptoms face elevated risk of adverse consequences, including for physical health (eg, cardiovascular, metabolic and respiratory conditions) and psychological health (eg, depression, anxiety, insomnia), as well as impaired workplace performance (eg, absenteeism, presenteeism, job dissatisfaction, occupational injuries) and reduced practice of healthy behaviours (eg, a predilection for unhealthy substance use, dietary indiscretion, physical inactivity, reduced handwashing). 10 With the COVID-19 pandemic came profound changes to workplace factors for many employers and employees. Indeed, many employed US adults experienced work-related changes in 2020 in response to the pandemic. Approximately one-third transitioned from in-person to remote work, 11 and many experienced lay-offs or furloughs. To provide essential services or manage staff reductions, others took on extended-duration shifts and long work-weeks, potentially contributing to sleep deficiency and circadian disruption, which are factors associated with burnout. 12 13 Specifically, insufficient sleep has been hypothesised as a mechanism contributing to burnout through impaired recovery, chronic depletion of energy stores and hyperactivity-related dysregulation of the hypothalamic-pituitary-adrenal axis, resulting in a chronically increased allostatic load and ultimately burnout. 14 15 Insufficient sleep is common, as one-third of US adults report insufficient sleep 16 and many live with undiagnosed and untreated or undertreated sleep disorders. 17
Some of these worksite and employment changes could alleviate burnout (eg, reduced commute time, affording increased opportunity for sleep), while others may exacerbate burnout (eg, reduced work-and-home separation).
Occupational burnout can negatively influence individual workers and the people with whom they interact. 5 For example, if an individual is affected by burnout, evidence suggests they are less likely to seek medical care for health concerns. 18 19 Prepandemic studies have also found negative associations of burnout with hand hygiene among nurses, 20 and with adherence with personal protective equipment utilisation and work-safety practices among firefighters. 21 Findings linking lower adherence with safety measures and burnout are particularly relevant during infectious disease outbreaks such as the COVID-19 pandemic, when reduction of community transmission of SARS-CoV-2 depends on engagement with healthy behaviours. The extent to which occupational burnout is associated with reduced engagement with behaviours recommended to protect against COVID-19, however, is not known.
A growing body of evidence reports sleep and occupational factors associated with burnout among healthcare professionals during the COVID-19 pandemic; [22][23][24][25] however, comparatively little research has focused on burnout across occupational sectors. Furthermore, studies conducted during the COVID-19 pandemic have found unpaid caregivers for children and adults, young adults, women and essential workers have disproportionately experienced adverse mental health symptoms, [26][27][28][29][30] but our understanding of how these and other demographic factors relate to burnout during the pandemic is limited.
To address the research needs of (1) investigating the impact of occupational burnout on health behaviours in response to the COVID-19 pandemic and (2) identifying key factors associated with burnout to inform tailored workplace strategies, we examined burnout symptoms, associated sleep, demographic and occupational factors and adherence with COVID-19 health behaviours in a demographically representative sample of employed US adults.
Study sample
To assess occupational burnout symptoms in December 2020, internet-based surveys were administered by Qualtrics to US adults aged ≥18 years as part of The COVID-19 Outbreak Public Evaluation (COPE) Initiative. The COPE Initiative (https://www.thecopeinitiative.org/) is designed to assess public attitudes, behaviours and beliefs related to the COVID-19 pandemic and to evaluate mental and behavioural health during the pandemic. The COPE Initiative surveys included in this analysis were administered by Qualtrics. Quota sampling and survey weighting were employed to improve sample representativeness of the US population by sex, age and combined race and ethnicity. Surveys were administered cross-sectionally to eliminate potential for survivorship bias, 31 a source of selection bias in which survey respondents who consistently participate in longitudinal studies have better baseline mental health and mental health trajectories compared with those who attrite.
A minimum age of 18 years and residence within the USA were required for eligibility to complete a survey in December 2020. All surveys underwent data quality screening procedures including algorithmic and keystroke analysis for attention patterns, click-through behaviour, duplicate responses, machine responses and inattentiveness. Country-specific geolocation verification via IP address mapping was used to ensure respondents were from the USA. Respondents who failed an attention or speed check, along with any responses identified by the data-scrubbing algorithms, were excluded from analysis.
Measures
Burnout was assessed using the single-item Mini-Z, a non-proprietary measure of the emotional exhaustion dimension of burnout across occupations. 32 Higher Mini-Z scores from 1 through 5 reflect progressively more severe burnout symptoms. Respondents who score ≥3 out of 5 generally screen positive for burnout symptoms. The Mini-Z has been validated using the emotional exhaustion subscale of the widely administered proprietary Maslach Burnout Inventory. The validation study included 5404 participants associated with the Veterans Health Administration, including primary care providers, registered nurses, clinical associates and administrative clerks. Using the emotional exhaustion subscale of the Maslach Burnout Inventory as a comparator, the Mini-Z had a 0.79 correlation, 83.2% sensitivity, 87.4% specificity and 0.93 area under the receiver operating characteristic curve. 32 Importantly, results were similar when stratified by respondent occupation, which suggests some level of generalisability of the measure across occupations.
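A quick check of what the reported validation figures imply for screening can be sketched in Python. The screening rule follows the text (scores ≥3 screen positive), while the 27% prevalence below is an assumption loosely based on the discussion's "more than one-quarter" figure, not a reported parameter.

```python
def screens_positive(mini_z_score):
    """Mini-Z screening rule from the text: scores of 3-5 screen positive."""
    return mini_z_score >= 3

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(burnout | positive screen) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Validation figures from the text; the 27% prevalence is an assumption.
ppv = positive_predictive_value(0.832, 0.874, 0.27)  # roughly 0.71
```

Under these assumptions, roughly seven in ten positive screens would correspond to true burnout cases; at lower prevalence the same sensitivity and specificity would yield a lower predictive value.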
Demographic variables included gender, age, combined race and ethnicity, disability status as assessed as a positive response to item 7.22 or 7.23 of the 2015 Behavioral Risk Factor Surveillance System Questionnaire, education attainment, US Census region and self-reported urbanicity. Employment-related characteristics included employment status, paid work hours per week, percentage of work hours completed remotely (ie, not in-person) and job sector. Unpaid caregiver status was assessed, both for adults aged ≥18 years and for children or adolescents aged <18 years. Sleep characteristics included self-reported sleep duration, insomnia symptoms assessed using the clinically validated 2-item Sleep Condition Indicator 33 and history of diagnosed sleep or circadian disorders and whether or not respondents were receiving treatment or taking medication for these conditions.
Frequency of adhering with COVID-19 protective behaviours was assessed using a 5-item Likert scale with Never, Rarely, Sometimes, Often and Always as response options. The question 'In the last week, how frequently did you…' was asked with the following behaviours: avoid gatherings for ≥10 persons; avoid going to places where you could not stay 6 feet away from people outside your household unit; wear a mask or cloth face covering when in public; wash your hands with soap and water after touching high-touch surfaces in public (eg, shopping carts, gas pumps, automated teller machines); and use hand sanitiser after touching high-touch surfaces in public. Hand hygiene was considered as frequency of either washing hands or using hand sanitiser, with the higher frequency designated. Mask usage and hand hygiene were only assessed among respondents who indicated they had been in public in the prior week. Multivariable models were constructed with Rarely and Never collapsed into a single response option given the similar public health implications for both scenarios.
Likelihood of obtaining a COVID-19 test if potentially infected with SARS-CoV-2 was assessed using a 3-item Likert scale with Not at all likely, Somewhat likely and Very likely as response options. Respondents could also select 'Don't know/Not sure' or 'I do this anyway'. The question 'If you thought you might have COVID-19, how likely would you be to do the following?' was asked, with getting tested for COVID-19 specified as the behaviour. Multivariable models included all employed respondents who did not select 'Don't know/Not sure' or 'I do this anyway'.
Statistical analysis
Survey weighting (iterative proportional fitting, trimmed with 1/3≤weight≤3) was employed to improve sample representativeness of the US adult population by sex, age and combined race and ethnicity using 2010 US Census estimates. Sex and gender were assessed separately. Sex was used to weight based on population estimates. Gender was used as a demographic variable in the analysis.
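Iterative proportional fitting (raking) with weight trimming, as described above, can be sketched in a few lines. This is an illustrative pure-Python version (the study used the R survey package), with made-up dimension and category names.

```python
def rake(rows, targets, n_iter=50, trim=(1.0 / 3.0, 3.0)):
    """Minimal raking (iterative proportional fitting) sketch.

    rows: list of dicts mapping dimension name -> category label.
    targets: dimension name -> {category: target population share}.
    Returns one weight per row, trimmed to the interval used in the text
    (1/3 <= weight <= 3).
    """
    weights = [1.0] * len(rows)
    for _ in range(n_iter):
        for dim, shares in targets.items():
            # Current weighted total of each category in this dimension.
            totals = {}
            for row, w in zip(rows, weights):
                totals[row[dim]] = totals.get(row[dim], 0.0) + w
            grand = sum(totals.values())
            # Rescale so weighted shares match the population targets.
            for i, row in enumerate(rows):
                current_share = totals[row[dim]] / grand
                weights[i] *= shares[row[dim]] / current_share
    lo, hi = trim
    return [min(max(w, lo), hi) for w in weights]

# Toy example: an over-sampled category is down-weighted to a 50/50 target.
w = rake([{"sex": "f"}, {"sex": "f"}, {"sex": "m"}],
         {"sex": {"f": 0.5, "m": 0.5}})
```

With several dimensions (sex, age, race and ethnicity), the inner loop cycles over each dimension in turn until the weighted margins converge; trimming then caps extreme weights at the cost of slightly missing the targets.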
To evaluate potential associations with demographic, employment and sleep characteristics and occupational burnout, weighted ordinal logistic regressions were used to estimate adjusted odds ratios (aORs) for Mini-Z burnout scores. All adjusted models for potential associations between demographic, employment and sleep-related characteristics and burnout symptoms included gender, age, combined race and ethnicity, disability status, education attainment, US Census region, rural/urban residence, unpaid caregiver status, paid weekly work hours and remote work percentage. Separate models were used to evaluate potential associations with other employmentrelated variables and sleep-related variables.
To evaluate potential associations with COVID-19 health behaviours, weighted ordinal logistic regressions with occupational burnout as the explanatory variable were used to estimate aORs for lower frequency of mask wearing, hand hygiene, avoiding gatherings of ≥10 persons and physical distancing from others, and for lower likelihood of obtaining a COVID-19 test if the respondent believed they might have an active SARS-CoV-2 infection. All adjusted models for potential associations between burnout symptoms and non-adherence with COVID-19 health behaviours included these previously listed variables, plus job sector.
Statistical significance was assessed as p<0.05. Rounded, weighted values are reported. Analyses were conducted in R V.4.0.2 with the R survey package V.3.29 and Python V.3.7.8. All participants provided informed electronic consent prior to enrolment in the survey.
Patient and public involvement
None.
RESULTS
Demographic characteristics associated with greater odds of more severe occupational burnout included younger compared with older age (eg, aged 18-24 vs ≥65 years, burnout symptom prevalence=37.6%, 5.7%, respectively; aOR=3.3, 95% CI 2.1-5.3), women compared with men (30.9%, 21.3%; aOR=1.6, 95% CI 1.4-1.9) and Hispanic or Latino adults compared with non-Hispanic white adults (33.1%, 22.4%; aOR=1.7, 95% CI 1.3-2.3) (tables 1 and 2). Employment characteristics associated with increased odds of more severe occupational burnout included evening or night shifts compared with day shifts (tables 3 and 4). Additionally, odds of more severe burnout symptoms were higher among individuals who had diagnosed sleep or circadian disorders (insomnia, obstructive sleep apnoea, shift work disorder) who were not receiving treatment or taking medication compared with individuals who were not diagnosed with these disorders, but not among those with these diagnosed sleep or circadian disorders who were receiving treatment or taking medication (tables 3 and 4). Employed US adults who were experiencing burnout symptoms had greater odds of less frequently adhering with COVID-19 health behaviours (table 5). Adjusting for demographic and employment characteristics, those who were experiencing burnout symptoms had greater odds of having less frequently worn a mask when in public (aOR=1.7, 95% CI 1.3-2.1), practised hand hygiene (aOR=2.1, 95% CI 1.7-2.7), avoided gatherings of ≥10 persons (aOR=1.4, 95% CI 1.1-1.7) or maintained a 6-foot physical distance from others (aOR=1.3, 95% CI 1.1-1.6); all p<0.05. Individuals with burnout symptoms also had higher odds of being less likely to obtain a COVID-19 test if they thought they may be infected with SARS-CoV-2 (aOR=1.4, 95% CI 1.1-1.8, p=0.0096).
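For readers unfamiliar with odds ratios, the gap between raw prevalences and the adjusted estimates can be illustrated by computing a crude (unadjusted) OR from the reported age-group prevalences. The large difference from the adjusted aOR of 3.3 reflects, at least in part, the covariate adjustment and ordinal modelling.

```python
def odds(p):
    """Convert a probability into odds."""
    return p / (1.0 - p)

def crude_odds_ratio(p_exposed, p_reference):
    """Unadjusted odds ratio comparing two prevalences."""
    return odds(p_exposed) / odds(p_reference)

# Burnout prevalence: 37.6% (aged 18-24) vs 5.7% (aged >=65), from the text.
or_age = crude_odds_ratio(0.376, 0.057)  # roughly 10, vs the adjusted aOR of 3.3
```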
DISCUSSION
More than one-quarter of 3026 employed US adult respondents were experiencing occupational burnout symptoms in December 2020. Occupational burnout was associated with less frequent practice of COVID-19 prevention behaviours, including mask usage. Women, younger adults, unpaid caregivers, Hispanic or Latino adults and those working more on-site versus remotely more commonly experienced burnout symptoms than employed adults in comparator demographic groups. Working night and evening shifts, short sleep duration and insomnia symptoms were also associated with burnout symptoms. Finally, individuals with untreated sleep or circadian disorders, but not those with such disorders receiving treatment, had greater odds of burnout symptoms than those without these disorders.
Burnout symptoms were associated with reduced engagement in personal COVID-19 protective behaviours, as employees experiencing occupational burnout symptoms had greater odds of less frequent practice of behaviours to protect against COVID-19, including mask usage, practice of hand hygiene, avoidance of in-person gatherings and maintenance of physical distance. Reduced engagement in COVID-19 protective behaviours, which persisted after adjusting for demographic and employment characteristics, provides further evidence of adverse consequences of the occupational hazard of burnout.
To our knowledge, this study is the first to identify the negative association between burnout symptoms and COVID-19-recommended health behaviours in a general occupational sample, revealing associations that align with
prepandemic burnout and safety practice research. 20 21 Critically, our findings extend occupation-specific and hand hygiene-specific findings during the COVID-19 pandemic, including associations between burnout symptoms and (1) reduced hand hygiene among healthcare workers in China, 34 (2) reduced personal protective equipment adherence and hand hygiene among healthcare workers in Malaysia 35 and (3) reduced handwashing behaviours among restaurant kitchen chefs in China. 36 Interestingly, a moderation analysis conducted on frontline healthcare professionals in Pakistan found that high levels of handwashing buffered the negative influence of burnout on mental health, 37 identifying another relation that merits attention. Our findings also add to prepandemic literature describing reduced healthcare-seeking behaviours commonly reported among individuals with burnout. 18 19 Notably, we found that if affected by burnout, employees were less likely to obtain a COVID-19 test if potentially infected. Amidst a broader observation of deferred or neglected medical care during the pandemic, 38 39 whether burnout has also influenced other healthcare-seeking behaviour at this time is unknown. Community-supported and employer-supported programmes targeted towards reducing occupational burnout may improve adherence with COVID-19 health behaviours among employees, which could benefit both employees and those with whom they interact. Moreover, clinicians and providers should recognise the reduced healthcare seeking associated with burnout symptoms and could consider proactive screening in populations that disproportionately experience burnout.
Occupational burnout symptoms were disproportionately experienced by specific populations, including women, younger adults and unpaid caregivers, which is consistent with prepandemic data 1 and evidence from Germany during the COVID-19 pandemic. 40 Importantly, Meyer et al found that employed women with job autonomy and partner support had better psychological health during the pandemic, highlighting value in protective factors. Our findings of burnout among young persons and unpaid caregivers closely align with broader mental health research that has revealed that these populations have disproportionately experienced adverse mental health symptoms, including depression and anxiety symptoms. [26][27][28][29][30] Occupational burnout symptoms may be another area of concern for these populations. There is debate regarding the extent to which burnout symptoms may overlap with depression and anxiety symptoms, 5 yet recent findings show these conditions to be distinct, 41 and, to our knowledge, there is no evidence of this overlap using the Mini-Z burnout measure administered in the current study.
Further research is needed to understand and alleviate contributors to burnout within disproportionately affected populations in the workforce (eg, women, caregivers, young adults). Intervention efforts could focus on restructuring social and economic systems to reduce gender and racial pay gaps, 42 43 which create inequitable opportunities for these populations to have living wages. Concurrent efforts could focus on developing support systems for additional factors that might more broadly contribute to occupational burnout, including essential work in low-wage jobs and economic insecurities for younger persons, increased need for daytime childcare for those in virtual-learning environments and disruptions to the provision of care for adults. For employers, considerations could include improving access to and accessibility of employment-based mental health services and providing mindfulness-based programmes or seeking to improve recognition among employees given promising findings of reduced burnout associated with these measures. 44 45 More broadly, as outlined in the 2022 US Surgeon General's Advisory on Health Worker Burnout, 46 addressing occupational burnout will require recognition that burnout is a distinct workplace phenomenon demanding system-oriented, organisational-level solutions beyond individual-level support.
Compared with day shift workers, employees working evening and night shifts had higher odds of burnout symptoms. These results are consistent with prepandemic data, 12 and with recent research conducted during COVID-19 in frontline healthcare workers. 47 Shift work is increasingly common across occupations, including those outside of healthcare and other frontline professions. 48 Therefore, by including employees from a range of job sectors, our findings highlight the association between burnout symptoms and night or evening shift work among the general working population during the pandemic.
Of further relevance to the general working population is the potential impact of working remotely on burnout symptoms, given over one-third of employed adults transitioned to remote work during the pandemic. 11 Working remotely only a small amount of time, with most work completed on-site, has been shown to result in lower job satisfaction and higher work-family conflict, 49 factors shown to increase the risk of burnout. 50 Considering 30% of our sample reported combined on-site and remote work arrangements, our findings may have implications for enhanced monitoring of burnout symptoms in these sectors of the workforce.
Beyond demographic and employment characteristics, employed adults with sleep deficiency or insomnia symptoms had higher odds of more severe burnout symptoms. The relationship between sleep deficiency and burnout symptoms is consistent with findings from a study of a US adult general population sample that used objective wearable devices to measure sleep-wake data, in which persistently short sleep duration and sleep duration shortened during the pandemic were each associated with burnout, anxiety and depression symptoms. 51 Additionally, untreated or potentially undiagnosed sleep or circadian disorders (ie, insomnia, obstructive sleep apnoea, shift work disorder) were associated with more severe burnout symptoms, but treated diagnosed sleep and circadian disorders were not. Prepandemic research has reported similar relationships between untreated and undiagnosed sleep disorders and burnout symptoms in healthcare workers, 52 which, together with our findings, highlights the potential protective role that treatment of sleep and circadian disorders may have in reducing burnout symptoms.
With sleep deficiency and undiagnosed and untreated sleep disorders common among US adults, 16 these findings suggest that employers may address burnout by sponsoring sleep disorder, sleep enhancement or fatigue reduction workplace health promotion programmes, which were offered by less than 10% of US worksites in 2017. 53 Clinicians and healthcare systems could also contribute to diagnosing and treating sleep disorders to mitigate burnout symptoms among broader health improvements. Improving sleep health may also reduce the economic impact of sleep deficiency, which was estimated to cost US businesses US$411 billion annually. 54
Strengths of this study include assessment of burnout in a demographically representative sample of more than 3000 employed US adults spanning across occupations, use of a validated instrument to assess burnout symptoms and application of measures to reduce non-response bias during (demographic quota sampling) and after (survey weighting) data collection. Moreover, demographic, employment and sleep characteristics were comprehensively characterised and adjusted for in multivariable analyses, and multiple COVID-19 prevention behaviours were assessed and included in this analysis. Finally, a cross-sectional study design was used to eliminate potential for survivorship bias to influence relationships. 31
Limitations of this study include the use of self-report data, which are subject to recall, response and social desirability biases, especially for COVID-19 health behaviours. Additionally, the single-item Mini-Z is validated to assess the emotional exhaustion dimension of occupational burnout; future studies could focus on the depersonalisation and reduced personal accomplishment dimensions. Moreover, the Mini-Z was validated in a sample of clinical and administrative primary care staff.
Encouragingly, agreement and discrimination statistics from the validation study support the generalisability of the Mini-Z across occupations included in the validation study, though additional studies could characterise the psychometric properties of the Mini-Z across more diverse occupations. Moreover, cross-sectional findings do not demonstrate causality. While a comprehensive set of variables was included in multivariable analyses, confounding factors might partially account for relationships reported in this analysis. Finally, although quota sampling methods and survey weighting were employed to improve representativeness, this internet-based sample may not be fully representative of the 2020 employed adult US population.
CONCLUSION
In this demographically diverse sample of 3026 employed US adults, occupational burnout symptoms were more common among respondents who were of younger age or female gender, those with lesser remote work or with unpaid caregiver roles and those with insufficient or impaired sleep. In turn, occupational burnout symptoms were associated with non-adherence with key COVID-19 prevention behaviours, including hand hygiene, mask usage, physical distancing, avoiding gatherings and obtaining COVID-19 tests if potentially infected. Future studies should explore the extent to which employers can support the health of their employees by implementing strategies to address occupational burnout, such as promotion of work-life balance and sponsorship of sleep enhancement programmes and other wellness promotion programmes. Addressing occupational burnout and providing resources to reduce burnout among employees could reduce non-adherence with COVID-19 prevention behaviours.
Contributors MÉC, CAC, SMWR, MEH and RIL designed the study. MÉC, APW and RIL conceived the manuscript. MÉC worked with Qualtrics research services to administer the survey, and analysed the data with guidance from all authors. MÉC created the tables and wrote the first paper draft with APW. All authors provided critical input and revisions to the paper. SMWR, MEH and RIL supervised. MÉC, as guarantor, accepts full responsibility for the finished work and/or the conduct of the study, had access to the data, and controlled the decision to publish.
Funding Funding for The COPE Initiative was provided through institutional grants to Monash University from the CDC Foundation, with funding from BNY Mellon, and from WHOOP. MÉC was supported by a 2020-2021 Australian-American Fulbright Fellowship.
Competing interests MÉC, CAC, SMWR and MEH report institutional grants to Monash University from the CDC Foundation, with funding from BNY Mellon, and from WHOOP. MÉC reported grants from the Fulbright Foundation sponsored by The Kinghorn Foundation and personal fees from Vanda Pharmaceuticals. APW reports serving as board member for the Sleep Health Foundation and grants from the NHMRC (APP1138322), Shell and Australasian Sleep Association. CAC reported receiving personal fees from Teva Pharma Australia, Inselspital Bern, the Institute of Digital Media and Child Development, the Klarman Family Foundation, Tencent Holdings, the Sleep Research Society Foundation and Physician's Seal; receiving grants to Brigham and Women's Hospital from the Federal Aviation Administration, the National Health Lung and Blood Institute (U01-HL-111478), the National Institute on Aging (P01-AG09975), the National Aeronautics and Space Administration and the National Institute of Occupational Safety and Health (R01-OH-011773); receiving personal fees from and equity interest in Vanda Pharmaceuticals; educational and research support from Jazz Pharmaceuticals, Philips Respironics, Regeneron Pharmaceuticals and Sanofi; an endowed professorship provided to Harvard Medical School from Cephalon; an institutional gift from Alexandra Drane; and a patent on Actiwatch-2 and Actiwatch Spectrum devices, with royalties paid from Philips Respironics. CAC's interests were reviewed and managed by Brigham and Women's Hospital and Mass General Brigham in accordance with their conflict of interest policies. CAC also served as a voluntary board member for the Institute for Experimental Psychiatry Research Foundation and a voluntary consensus panel chair for the National Sleep Foundation. 
SMWR reported receiving grants and personal fees from Cooperative Research Centre for Alertness, Safety and Productivity, receiving grants and institutional consultancy fees from Teva Pharma Australia, and institutional consultancy fees from Vanda Pharmaceuticals, Circadian Therapeutics, BHP Billiton and Herbert Smith Freehills.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Ethics approval This study involves human participants and the authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. The protocol was approved by the Monash University Human Research Ethics Committee (MUHREC) (reference number: 24036). Participants gave informed consent to participate in the study before taking part.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request. Data supporting the findings in this study are available from the corresponding author upon reasonable request and institutional approval. Reuse is permitted only following a written agreement from the corresponding author and primary institution.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
A Glance to Teachers' Work with Resources: Case of Olcay
When examining success in mathematics education, it is important to examine teachers' work with their resources. This study aims to examine this work through the processes by which teachers use resources and transform them into documents. In this context, the "Documentational Approach to Didactics" is adopted as the theoretical framework. The reflective investigation method is used to analyse the teacher's documentational genesis. The study is designed as a case study with a primary mathematics teacher, whom we named Olcay, who is very open to sharing her experiences, which is important for the research. Various interviews with and observations of the teacher were made according to the requirements of the reflective investigation method. As a result, some of the schemes the teacher uses to transform her resources into documents are revealed. Some of these schemes are similar to ones discovered before, and some vary according to the setting in which the teaching takes place.
Introduction
Teachers, who open the path to building knowledge, perform essential tasks that also provide information on learners' training (Altun et al., 2004; Cohen et al., 2003). These essential tasks include teachers' interaction with their resources. Teachers interact with their resources by "selecting, modifying, collecting and creating new resources" as part of their daily work (Trouche et al., 2020). In this regard, analyzing the resources and documents that teachers integrate into their courses is crucial because it provides information on student learning and professional development (Adler, 2000; Hewson, 2004).
Teachers improve their courses by interacting with different resources over time and in parallel with the different resources they used (Ruthven, 2013). Teachers gather, select, transform, reorganize, share, implement, and revise resources within processes where design and enacting are intertwined. The documentation encompasses all these interactions (Gueudet & Trouche, 2009a). It is important to analyze documentation processes that affect both their professional development and teaching processes in this perspective.
Teachers frequently use textbooks to ensure students' learning (Pepin & Haggarty, 2001). They also use digital resources and written or verbal resources. A theoretical approach that covers all these types of resources is therefore needed. The "Documentational Approach of Didactics" (DAD), which helps analyze the resources and documents that teachers use, comes to the fore in this perspective (Gueudet & Trouche, 2009a). In this study, all concepts and processes are analyzed within the framework of the DAD. Thus, the term "resource" refers to any entity (notes, training, events, books, web pages, etc.) from which the teacher obtains data to structure his/her teaching. Similarly, the term "document" refers to the teacher's resources once they have become ready to use. Although "document" in the everyday sense suggests a written source, the document in this study is the final state of the teacher's knowledge obtained from the resources; it does not need to be written. Other specific terms of the theoretical framework are described in detail in the next section.
Documentational Approach of Didactics
The DAD is concerned with teachers' professional development, analyzed through their interaction with resources (Gueudet & Trouche, 2009a). DAD contains its own specific concepts, as is customary in the French didactics tradition. While some of these concepts define objects such as resources and documents, others define processes like instrumentation and instrumentalization. These processes have been adopted from the instrumental theory (Guin et al., 2006). In the documentational approach, "instrumentation" refers to the teacher's process of adapting himself/herself to the characteristics of particular resources while using them. "Instrumentalization" represents how teachers use particular resources and shape them according to their methods and aims. The concept of documentation is also included within the scope of the DAD. It is defined by how teachers create schemes of utilization for the resources they regard as necessary for particular situations. From this viewpoint, it can be said that documentational genesis combines resources with utilization schemes for those resources. This combination may be expressed as follows:

Document = Resource + Utilization Scheme

Such a representation might suggest that the documentational genesis process has a static structure. However, the process is considerably dynamic. A document contains many interrelated resources and can itself serve as a resource for many documents. As for utilization schemes, just as they may be a constant organization applied in particular situations, in other words, a set of fixed professional behaviors the teacher exhibits in certain situations, they may also be recreated during the documentational genesis process. The documentational genesis process is shown in Figure 1.
Figure 1.
A representation of the documentational genesis process (Gueudet & Trouche, 2009a, p. 206)

While examining how teachers transform resources into documents, DAD also argues that this documentational genesis process affects teachers' professional development. To understand a teacher's development, it is necessary to examine all the documents the teacher creates and to discuss his/her document system. The resource system is the system formed by all the resources the teacher uses, irrespective of his/her utilization schemes. The document system, however, is a structured system in which the documents created by the teacher are correlated; in this system, the particular documents to be used for particular situations and the utilization schemes of the resources are definite.
While obtaining schemes, operational invariants and action rules are taken into account. Two concepts describe operational invariants: the theorem-in-action and the concept-in-action. The theorem-in-action is the proposition that an individual adopts when performing a behavior and holds to be effective. The concept-in-action is the concept according to which the individual acts. Action rules include the requirements for an individual to act; they are a set of rules that demonstrate how to act under certain conditions (Chevallard, 1985). Operational invariants and action rules together define the scheme, so, in this study, they are taken as determinatives for recognizing the subtle organization of the schemes.
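The relation Document = Resource + Utilization Scheme and the scheme components described above can be given a purely illustrative formalisation. The sketch below is our own modelling aid, not part of the DAD framework; all class and field names, and the example values, are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative model only: class and field names are our own,
# not defined by the Documentational Approach of Didactics.

@dataclass
class Resource:
    name: str  # e.g. a textbook chapter, web page, or set of notes

@dataclass
class UtilizationScheme:
    goal: str  # the class of teaching situations the scheme serves
    # operational invariants: theorems-/concepts-in-action the teacher holds
    operational_invariants: list = field(default_factory=list)
    # action rules: "under this condition, act this way"
    action_rules: list = field(default_factory=list)

@dataclass
class Document:
    resources: list            # a document may combine several resources
    scheme: UtilizationScheme  # Document = Resource + Utilization Scheme

# Hypothetical example inspired by the observed lesson topic
textbook = Resource("7th-grade textbook, pattern generalization section")
scheme = UtilizationScheme(
    goal="introduce pattern generalization",
    operational_invariants=["start from concrete numeric patterns"],
    action_rules=["if students struggle, return to a numeric example"],
)
doc = Document(resources=[textbook], scheme=scheme)
print(len(doc.resources), doc.scheme.goal)
# → 1 introduce pattern generalization
```

The point of the sketch is only that a document couples a set of resources with a stable yet revisable organization (the scheme); in the dynamic view of documentational genesis, both parts can change over time.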
In the literature, studies using DAD have focused on examining teachers' schemes to reveal their content knowledge and mathematical concepts (Gueudet, 2017; Gueudet et al., 2013; Poisard et al., 2011). There are also studies focusing on the metamorphosis of thinking with and implementing static and dynamic resources, and on creating new balances between the individual and collective work of teachers. In these perspectives, DAD suggests analyzing teachers' work with resources through the lens of what they prepare for their classroom practices and what is renewed in these practices. The basis of the DAD is the instrumental approach developed in the field of technology use in mathematics classrooms (Guin et al., 2005). The concepts of instrumentation and instrumentalization are also essential in the instrumental approach. Pepin and Gueudet (2018) explain the differentiation between digital curriculum resources and educational technology. Adler (2000) and Pepin et al. (2013) also suggest thinking of the resource as the verb re-source: "to source again or differently" (p. 207). Ball et al. (2005) state that teaching cannot be reduced to the work in class, but also includes planning. Psycharis and Kalogeria (2018) studied teacher educators' work with resources, and Kock and Pepin (2018) studied students' interaction with resources using DAD. In this context, this research aims to analyze how teachers organize their resources by analyzing the schemes and processes that appear in documentational genesis. The difference between this study and other studies in which DAD is used is that the teacher's schemes related to his/her instructional strategies are examined instead of just the mathematical concepts related to the course. Progress related to mathematical concepts in the context of pre-service and in-service teacher training is very important, but what differentiates one teacher from another is the instructional strategies the teacher utilizes.
It is thought that this study will contribute to the field in this perspective.
Method
In this study, qualitative research methods were used because the aim was not to generalize the data to the universe but to deeply analyze the documentational genesis process (Creswell, 2017). The study was designed as a case study.
In the study, the reflective investigation method was used to select data collection tools and conduct data collection. This method is recommended by the creators of the theoretical framework for researchers using the DAD. Its main principles are as follows:

Long-term follow-up: Since documentational geneses are long-lasting processes and schemes develop during the process, this principle requires a detailed and long-term observation of the process (in this study, the duration is approximately six months).

In- and out-of-class follow-up: The classroom is a significant environment where the teacher conducts her lessons and applies the documents she creates. In addition, much of the interaction of teachers with resources takes place outside the classroom: at home, at school, in in-service training courses. For this reason, it is essential to observe the teacher in these different places.
Broad collection: It involves observing all the resources that the teacher has used in documentational genesis and what they have created in the process of documentational genesis.
Reflective follow-up: It requires involving the teacher as much as possible in the data collection process. The teacher needs to be actively involved in examining the teacher's resource collection and following it in and out of the classroom. These parts can be understood only by the detailed explanations of the teacher (Gueudet et al., 2012, p. 27-28).
Concerning these principles, it is thought that the reflective investigation method is highly appropriate for such a study that investigates documentational work.
Participant Teacher: Olcay
This study aims to investigate the schemes that teachers have created in this process. In qualitative studies, when the investigation needs to go deeper, it is suggested to reduce the number of participants and increase the number of data collection sessions (Berg & Lune, 2015). In this context, the study was conducted with one participant teacher (Olcay).
Olcay is a primary mathematics teacher with ten years of experience in a public school in western Turkey. Previously, she completed the mathematics program at a university's science faculty in Turkey; then, she has taken pedagogical training to teach. She chose to work in a primary school instead of high school and completed the in-service training required. In addition, she completed the in-service training given within the scope of the FATİH project and thus, she was able to use the smartboard in the schools where she worked. Olcay mentioned that she benefited from technology by making smartboard and computer interaction in her previous school, but regretted that she could not use it due to lack of technological infrastructure.
She worked as a consultant teacher for teacher candidates in the "Teaching Practice" internship. Olcay's explanations of how she shared her resources and how she used them were very detailed. This made the researchers think that she was the most suitable participant for analyzing the documentational genesis process.
In the selection process of Olcay, the main point was not excellent documents or an excessive amount of resources. It was Olcay's ability to explain her resources, her usage of them and the ways she develops her lessons that made us select her. Her approach to sharing resources, her openness to explaining her lesson plans and her willingness to share her documentational work affected our decision. Also, her 10-year teaching experience made us think that she has the broad constant organization needed for documentational genesis. Olcay is interested in studies in her branch and was willing to help in this study. For these reasons, the study was conducted with Olcay to analyze her documentational work deeply.
Data Collection Tools
Data collection tools were developed and edited in line with the reflective investigation method (Gueudet & Trouche, 2009b; Trouche et al., 2018; Trouche & Pepin, 2014). The steps of the reflective investigation and the data collection tools are shown in Figure 2. First of all, a "Personal Information Form" was utilized to get more information about the teacher. This form was also used and recommended by Gueudet and Trouche (2009b) in studies using DAD. In this study, the form was translated into Turkish and used in the first visit to Olcay. The form collected information about the schools the teacher graduated from, the in-service training she received, the schools she worked at before, her perspective on technology and the points she generally paid attention to when lecturing. Since cultural aspects may differ from the previous study, the translated form was analyzed by specialists in mathematics education, and some questions were eliminated because they did not fit the Turkish educational system. (For example, in France, mathematics teachers have an electronic portal to share their resources, but there is no such portal for mathematics teachers in Turkey. Moreover, the exam systems differ between the countries.)
A semi-structured diary was utilized to ensure the "in- and out-of-class" principle of the reflective investigation. It mostly aimed at getting information about the out-of-class activities that led to changes in mathematics lesson preparation. (Olcay did not fill out the diary properly, so it was withdrawn from the analysis.) The teacher was also observed in school between her lessons to see how she arranged her resources. The diary was planned as a semi-structured form, aiming to capture the teacher's in- and out-of-class ideas about her mathematics lessons. The semi-structured form was examined by mathematics education academicians and a mathematics teacher, and its final version was completed according to their opinions.
The Schematic Representation of the Resource System (SRRS) was requested to see the teacher's resources and their relations. The SRRS is a data collection tool that the teacher prepares independently of the researcher, in which she describes her resources and usage styles. The SRRS is intrinsically an unstructured diagram because it aims to let teachers explain their resource systems as they prefer. The shape of the diagram is unstructured from the researchers' point of view but is shaped by the participant, who is free to draw the diagram as she wishes. With the help of the diagram, it was aimed to see in detail how she represented her resources and the relationships among them. While informing Olcay about the SRRS diagram, it was stated that there is no right or wrong shape; the diagram aims to show the resources used in structuring the courses and the relationships between them.
A semi-structured interview was implemented to get detailed information about the teacher's resources, opinions about using resources and documents, and what aspects she considered while preparing a lesson. In the structured part of the semi-structured interview, questions were asked about the use of resources and documents to assist the teacher, what she paid attention to in the use of resources, whether she had certain resources for certain subjects, what resources she used and how she continued to use them and to explain the changes in the course and the application methods. The semi-structured interview form was created according to expert opinions of two experienced academicians (different than the authors) in mathematics education field and a pilot study was held with a five-year experienced mathematics teacher to see the view of a teacher. The final version was completed according to their opinions. During the interviews, according to the teacher's explanations, researchers asked additional questions to the teacher.
The lessons of the teacher were observed and video-recorded. She was also observed in the school between her lessons. Researcher notes were taken during the observations. The notes concerned the changes Olcay made during the implementation of the course that were outside her lesson plan. As Sabra (2016) mentioned, the cases captured in these notes were accepted as documentational incidents. After the lesson, brief interviews were conducted about the interesting points identified in the researcher's notes. Thus, the effect of the changes on documentational genesis was confirmed by the teacher. For example, rather than making a hypothetical comment on the sudden changes the teacher made in class, the teacher was asked to explain the reasons for these changes. Thus, the validity and reliability of the observation data were increased. The observation was planned as unstructured; therefore, the lessons were video-recorded to prevent data loss and to allow repeated viewing of the important parts of the lesson.
After the observations and the interviews, the researchers prepared a recall video from the recorded data. These records were selected, cut, and reunited by the researchers regarding the parts that included valuable data about the elements of the schemes. This new record was watched and interpreted by the teacher. In the recall interview, the previous lesson and the previous lesson's preparation process were seen and interpreted by the teacher. It was important in shedding light on the teacher's changes between the course preparation and the course. She was asked to comment reflectively. In this way, the teacher gained awareness about her decisions and the revisions of those decisions. It can be said that the recall interview had an intensifying effect on the validity and credibility of the SRRS, observations and interviews.
The data collection tools were implemented as shown in Figure 3. First, the participant teacher was informed about the study topic before the study. She was also assured that the interview and observation records would never be shared with any other person, and it was guaranteed that a pseudonym would be used for the teacher in all publications.
Triangulation was also utilized to ensure the validity and reliability of the study; the interviews, observations and recall interviews were its components. In addition, the data collection process, the details of the data collection tools and the data analysis were explained thoroughly to support reliability.
Data Collection Process
The personal information form was utilized in the first visit to Olcay to obtain information about her personal and professional history. Afterwards, a semi-structured diary was given to Olcay to fill in day by day (a semi-structured diary here means a diary that includes the concepts we expected her to mention; however, she did not fill in the diary properly, so the diary was not analyzed). At the same time, an SRRS showing her resources and the relations between them was requested from Olcay. She asked questions about the diagram, and it was explained that there is no true or false version of the SRRS and that it could be shaped by the teacher herself to show how she organizes her resources. The aim was to help her complete the diagram more comfortably.
Two weeks after the first interview, the semi-structured interview was conducted. The teacher's views on the mathematical topics taught to her seventh-grade students and on her use of resources were obtained. At the same time, the topic of the lessons to be observed (pattern generalization and algebraic expressions) was decided, and the time of the lesson preparation was determined.
A week later, the lesson preparation was observed. During this observation, the researcher was involved in the process and simultaneously asked questions about the teacher's resources for the lesson.
In the following week, the lessons were observed and recorded with a video camera. During the observation, the researcher sat at the back of the classroom and did not interfere with the lessons. Notes were taken during the implementation of the lessons, and an interview based on these notes was held at the end of the lessons, in which questions were raised about the points that had attracted the researcher's attention. The researcher also spent a great deal of time with Olcay between her lessons to understand her way of thinking about her lessons, students, and resources; thus, out-of-class observations were made from the beginning to the end of the research.
All the data were then transcribed and coded. Then, the proofs of schemes were identified, and they were combined to form parts of a recall video.
About three weeks after the lessons were observed, a recall session was held with Olcay, and the video was discussed. A three-week break was given deliberately so that the teacher would distance herself somewhat from the process and could look at the lesson and the lesson preparation she had carried out with an outsider's eye.
Analysis of the Data
All the data from different data collection tools were analyzed and coded. After that, all the themes and codes were combined, and the overlapping and non-overlapping codes were specified. Then, shared themes and codes were created to reveal the schemes.
In the semi-structured diary, Olcay did not record entries as required; she shared only a few sentences about her experience with her daughter's homework (see the second paragraph of the Findings section). The diary was therefore not fully analyzed because of data inadequacy; only the sentences on her time with her daughter were utilized in the analysis.
Studies focusing on the interpretation of the SRRS diagram were considered in its examination (Hammoud, 2012; Rocha, 2018). First, predictions were made according to earlier studies on the SRRS, and Olcay's statements supported the accuracy of these predictions.
The interviews with Olcay were audio-recorded and transcribed verbatim afterwards. The camera recordings of the observations and the researcher notes were likewise transcribed, and screenshots were taken where necessary. The transcripts of the interviews and observations were subjected to content analysis together, and the themes and codes were revealed.
Finally, a recall video was prepared so that the teacher could explain the reasons for her behaviors more clearly. The data obtained from this recall session were also subjected to content analysis.
Findings
The data obtained from the personal information form were used to get to know Olcay more closely and were presented in the section where she was introduced. This section presents findings from the semi-structured diary, the SRRS diagram, the interviews, the lesson preparation, the lesson observation, and the recall interviews. All the data were analyzed together, and the schemes, themes and proofs of the schemes (codes) were revealed.
Olcay mentioned the mathematics exercises she had done with her daughter in her diary and drew conclusions for herself. Olcay expressed how she had made inferences in her work with her daughter as follows: "My daughter is older than my students, and I noticed that she misunderstood some subjects from the previous years… So, I decided to increase my repetitions and examples about that issue in the class."
Although Olcay had expressed things so briefly in her diary, she returned to the subject later in the recall interview. She thought that repetitive examples could prevent misunderstandings. So, she held this theorem-in-action: "(in a different institution outside the classroom) if a misconception is found, extra repetitions should be done to avoid it", and the associated concept-in-action is "misconception".
Olcay's SRRS diagram is given in Figure 4a and Figure 4b.
Figure 4a
Olcay's Schematic Representation of the Resource System (her original drawing)
Figure 4b
The reconstructed version of Olcay's SRRS by the researchers

When the SRRS diagram is examined, two schemes were hypothesized. One of them concerns the teacher's choice of homework resource: she chooses her homework from a resource that students also have access to, as she wrote in the description above the resource. Her theorem-in-action in this scheme is "Homework should be given from a shared resource", and her related concept-in-action is "equal access to the homework resource". She also mentioned in the interview that she took care to ensure equal access to homework in her lesson preparation. Accordingly, in her lessons she only gave homework from the shared book she had mentioned before.
She mentioned in the SRRS that she chose some of the resources because they were newly published. Her theorem-in-action in this scheme is "New resources should be used to keep up with changing curricula and systems", and the associated concept-in-action is "innovation". She also mentioned this situation in the interview about her resources.
The exam system had a direct effect on the variety of resources that Olcay integrated into her lessons; the exam system and the inadequacy of the official textbooks both affected her choice of resources. She chose resources that would make up for the inadequacy of the official textbook or that contained explanations aiming to familiarize students with the types of questions in the exams. This shows that she had carried out the instrumentalization process. At the same time, if we treat the exams as a resource, we can say that they greatly affected her teaching schemes. In this respect, since the teacher also adapted herself to the exams, it is possible to observe the instrumentation process as well.
Although Olcay stressed in the interviews that resources were critical in mathematics, she identified only two resources other than the official textbook in the SRRS diagram. Olcay expressed this situation in the following way: "I used to examine every resource available to me, such as official textbooks, webpages, supplementary textbooks and video narrations. As I more or less know the content of those, I pay more attention to the main resources that contain the points I want to explain. These two books are satisfactory for me this year." From this statement, another of Olcay's schemes can be discerned. She mentioned, "…I pay more attention to the main resources that contain the points I actually want to explain." The theorem-in-action that constitutes this scheme is "When choosing a resource, the teacher decides according to her teaching method, model and belief", and the associated concept-in-action is "documentation in DAD".
This statement of Olcay's also reveals the relationship between the time factor and documentational genesis, which is also addressed in DAD. Over time, Olcay had eliminated some of the resources, given preference to others, and made decisions thanks to the experience she had gained, showing the effect of the time factor on documentational genesis.
Olcay's statements also revealed that she gave importance to making compilations and to using resources containing both easy and difficult questions, as required by the exam system. These comments stress the institutional effects included in DAD. Being suitable for the curriculum published by the MoNE is important for Olcay; besides this, she also wished to assess students with different types of questions. In conclusion, she chose some of her resources according to the curriculum requirements and others according to the requirements of the examination system. While the curriculum in Turkey adopts a constructivist approach with open-ended problems, the national exam system comprises multiple-choice questions. This dilemma in the education system is reflected in Olcay's document system. In this case, the teacher's theorem-in-action is "When choosing a resource, both the curriculum and the examination system should be taken into consideration", and the concept-in-action is "the institutional effect in DAD".
It can be seen in Olcay's statements that, when planning her teaching, she thought that textbooks including word problems, fill-in-the-blank exercises and table-completion tasks help students achieve permanent and conceptual learning. Her theorem-in-action is "Word problems, filling in the blanks and completing the tables lead the information to be more permanent", and the related concept-in-action is "conceptual learning".
According to the interviews and observations, it may also be said that Olcay supported the students in solving difficult questions by giving extra points. Olcay's associated theorem-in-action is "The resources with difficult questions should be used to reward students", and the concept-in-action is "motivation".
Olcay explained that, when selecting and using her resources, she took care to follow the order of the curriculum, in these words:
"I include the learning outcomes directly in my lessons. After a topic has been taught, I give extra information where necessary… Let it be beneficial for next year, I say. Especially in the sixth grade."
Although Olcay stated that she conformed to the learning outcomes in the curriculum, she also stated that, when the learning outcomes were completed, she taught subjects belonging to the outcomes of the following year. She did not avoid including topics outside the schedule because she considered them useful to students in future years. In such a situation, in which her teaching schemes caused changes to a resource foreseen as unchangeable, like the curriculum, the instrumentation concept manifests itself. Olcay's theorem-in-action for this scheme is "If the content to be taught that year is completed, the next year's topics can be taught in advance", and the associated concept-in-action is "control of the didactic time".
Olcay stated that in choosing her resources, she also paid attention to visual material, as follows:
"…the reason why I use them is that there are visuals since we don't have projectors or computers. In the previous years, I used to introduce topics on the computer and show the visuals there. But here, as we don't have computers, I want them to see the visual materials in the books."
Since the school's physical facilities were not adequate, Olcay, instead of sharing visuals that she could have obtained from internet sources, tried to share the visual materials included in her textbooks. Both the elements of the teacher's concretization scheme and the lack of facilities in the institution thus led to concretization being implemented via the resources she selected. This statement of Olcay's is important in that it reveals the instrumentation process in DAD. Her theorem-in-action is "Teachers should use visuals to help students concretize some subjects", and the concept-in-action is "concretization".
Olcay explained that, in choosing her resources, she preferred resources suited to her conceptions of mathematics teaching, particularly regarding the order of topics, as follows: "I think algebraic expressions should be explained first; then pattern generalization should be taught… In all the resources that I use, algebraic expressions are given first, and pattern generalization comes after that. Because students haven't seen it before, when we give them the expression 3n, they cannot convert it into an algebraic expression, so they don't understand the topic." Olcay gave the example of the resources she used; her choosing and adopting those textbooks among the many resources that came to her school shows that she was more inclined to use books that paralleled her conceptions. She also held a scheme about the mathematical topic itself, whose theorem-in-action is "Algebraic expressions should be taught before pattern generalization" and whose associated concept-in-action is "ground preparation".
Although Olcay had stated in her previous comments that she reflected the learning outcomes in her teaching and did not make changes to them, she admitted that she wanted to change the order of the learning outcomes but avoided doing so because it would go against the curriculum. Here again, it is possible to mention the institutional effects stated in DAD: although the teacher's professional view inclined her towards changing the order of the learning outcomes, she behaved in compliance with the curriculum defined by the institution.
On the other hand, it may be said that Olcay did not follow some of the collective decisions regarding the curriculum; a dialogue that took place with another math teacher (MT) while Olcay was planning her lessons supports this thought. It may be said that, for Olcay, the group effects stated in DAD were less influential than the institutional effects. She also held a scheme whose theorem-in-action is "First, the patterns in the multiplication form (2n, 3n) should be taught; then, the patterns that include the plus form (2n+5) should be taught", with the related concept-in-action "from easy to hard". In the lesson, she likewise directed the students to start from the examples she presented in class and then move on to the official textbook. Relying on her experience, she stressed the importance of proceeding in a definite order from easy to hard according to the topic she was to teach, and she acted as she had stated: "… We're already going to explain number patterns. Straight after this, I'll draw a table with the step number and the number corresponding to that step and have them discover how to find the rule. I'm planning to start with number patterns and then proceed to shape patterns. Then, I'll give problems that don't require a fixed term, followed by problems that require a fixed term. I did it as in the previous years because students would not understand in another way." Olcay planned her lessons from easy to difficult based on her experience. Here, she again expressed the situation stated in the dialogue above: she would shape her teaching according to her professional viewpoint despite the mutual decisions. The easy-to-difficult principle also affected her resource selection; since she stated that she selected her resources according to her schemes, the instrumentalization process may be mentioned.
Olcay's resource sharing was not the result of a decision she made by herself but rather of a mutual decision made with her colleagues. However, teachers who did not want to use the collectively chosen resource decided individually which resources to use in their classrooms. Even during the research, when Olcay listed her resources, she requested that the books and websites in particular be kept secret, which reveals how powerful the institutional effect on the teacher is. The scheme associated with this situation becomes clear through the theorem-in-action "If there is a possibility that sanctions can be imposed on the teacher's career by the institution, the use of the resource in the classroom can be relegated to the second plan" and the concept-in-action "institution rules".
Olcay stated that, when she gave problems from the shared resources as homework, she solved them again in class to make sure that they had been correctly solved: "We give some of them for homework, and we also solve most of them in the class. Even if we give homework, we solve them again in class to check." These statements show that Olcay was sensitive about giving feedback. Here, the scheme is associated with the theorem-in-action "The problems in the assignment must be solved correctly" and the concept-in-action "joint correction".
During the lesson, Olcay proceeded as she had planned. However, in some parts of the lesson she diverged from her plan, solved additional examples, and gave additional explanations. She explained the reason as follows: "…In the class, if we had given only one example as in the plan, n wouldn't have been understood. As I was unsure whether they would find it, I felt the need to give a second example, to say that n is a variable, a representative number, the term sought. So n may be 15 or 50. A representative number. I wanted to stress that we are showing the number of steps. We even put an asterisk and wrote an explanation about that." Olcay explained that her classroom implementation differed from her plan because she revised it instantly according to the students' level of understanding. Here, Olcay updated her documentation by adding new examples to her teaching. Her changes and arrangements to the resources she used, made according to the students' level of understanding, constitute an example of the instrumentalization process. The teacher's scheme is associated with the theorem-in-action "Course content should be based on class level" and the concept-in-action "adaptation to the class".
Conclusion and Discussion
The schemes can be discussed as internal (particular to the teacher) and external (such as institutional factors). The internal schemes include the teacher's content knowledge, pedagogical content knowledge, and approaches such as easy-to-hard when organizing the lessons. The internal schemes particular to the teacher may be said to show similarity with the factors revealed by Gueudet and Trouche (2009b). However, differences were observed in external schemes such as institutional factors and the effects of the exam system.
In the study conducted by Pepin, Gueudet and Trouche (2013) on the sharing of resources by teachers, it was stated that teachers especially shared resources with their colleagues; the teacher in their study shared resources with both math and physics teachers and selected exercises that would also be suitable for physics lessons when structuring her lessons. In the present study, there is no evidence that Olcay shared resources with different branches, but she did share resources with her colleagues.
As for the scheme related to documentation, there are also studies in the literature on teachers selecting resources and classroom practices according to their beliefs and teaching methods (Shaw et al., 2008; İlter, 2018). Shaw et al. (2008) mentioned that teachers' practices and the resources they use reflect their beliefs about teaching the course.
Regarding the scheme related to didactic time, it is mentioned in the literature that teachers keep the didactic time under control. It is stated that experienced teachers in particular tend to keep didactic time under control so that students can understand efficiently, and that they sometimes move on to the subjects of the following year (Maurice & Allégre, 2002; Calmettes, 2007). Chevallard (1985) even attributed a godlike character to teachers, considering that they can accurately predict students' periods of understanding and the didactic time to be allotted to a subject (Margolinas, 2002).
The teacher's behavior, similar to the scheme obtained concerning concretization, has also been reported in the literature (Danesi, 2007; Presmeg, 2006; Presmeg, 2008; Usta et al., 2018; Rösken & Rolka, 2006). Danesi (2007), in his theoretical framework on conceptual metaphors, stated that teachers and students tend to concretize abstract mathematical issues given verbally in order to understand them; he reported that they do this by drawing the data of the given problem, trying to visualize it, and turning it into an equation. Similarly, Polya (1957, p. 174) also emphasized concretization by describing the path needed to solve a problem as "translating from one language into another".
Olcay's scheme of structuring the lesson according to the students' level is also reported in the literature (Dursun & Dede, 2004). Cohen et al. (2003) mentioned that teachers consider the students' level of learning when making instant adjustments to their lesson plans.
Solving the problems given as homework and wishing to be sure that the students gave the right answers are also mentioned in the literature as factors to be considered when giving homework (Ilgar, 2005; Korkmaz, 2004; Schmitz & Baumert, 2002; Turkoglu et al., 2007). Turkoglu et al. (2007) discussed homework correction techniques; one of the most important of these is the joint correction technique that Olcay adopted.
The scheme of using new resources to adapt to changing curricula and follow innovations is similar to the finding of Ozmantar et al. (2009) that a change in curriculum necessitates a change in classroom norms.
The institutional effects can be discussed in two aspects. In the first, the institution is an element that shapes the kneading of resources during documentational genesis; in the second, it is an element that interrupts this process. In the first case, resources appropriate for both approaches can be used to eliminate the problems arising from the difference between the curriculum and the national exam system. In the second case, if a resource would affect the teacher's career negatively in the eyes of the institution, its use is restricted.
Similarly, in the study of Butlen and Vannier (2010), determining course content appropriate to the curriculum and exam system is regarded as respecting students' rights, yet it is also considered a form of pressure by the institution, and it affects the teacher's development of her document system. Related to the second institutional effect mentioned above, a study conducted at the university level discussed the impact of changes in the exam system on the content of exams (Gueudet & Lebaud, 2008). Although that study concerns exams, the institutional effect that limits the content and duration of the exam corresponds more closely to the second situation.
Although it differs among schools, school administrators advise teachers not to recommend any resources to students. This may be attributed to the fact that, where there are financial differences among the students in a school, some students can easily access a resource while others cannot. However, the effects of this prohibition were observed once again when Olcay hesitated to share her resources with the researcher. It is also notable that the stress experienced by Olcay is among the causes of teachers' occupational stress and burnout reported in the psychology literature (Dinham, 1993; Kyriacou, 2001; Louden, 1987; Punch & Tuetteman, 1996; Pithers & Soden, 1999).
In the research by Gueudet and Trouche (2009b), the teachers filled in their diaries in the way explained to them. In this study, however, Olcay filled in her diary in a manner similar to the class notebook she used at school. Although Olcay included the experiences she considered mathematical in her diary, these sections made up only a small part of it. It is hypothesized that the semi-structured diary given to the teacher reminded her, in its form, of the school's class notebook. Such a situation did not arise in other studies examining documentational genesis, because the class notebook concept did not exist there or, where it did, it did not resemble the diary in form. Moreover, it has been reported in the literature that, when a diary is used as a data collection tool, people have difficulty expressing themselves in writing (Bolger et al., 2003).
Unlike in Gueudet and Trouche's (2009b) research, the participant teacher stated rather few resources in her SRRS diagram. In Gueudet and Trouche's study, the teachers also included internet sources in their SRRS diagrams; in this research, Olcay did not, despite stating in the interview that internet resources influenced her lessons. This may be interpreted as meaning that, although Olcay examined internet resources, she did not regard them as basic resources influencing her lessons this year. The concept of a "resource book" in Turkey may also have led the teacher to mention only resources in textbook format in her SRRS diagram. In addition, besides the other meanings of "resource", some studies in the literature take books as the "classic and usual" version of resources (Drijvers et al., 2013; Maschietto & Soury-Lavergne, 2013; Ruthven, 2013).
Furthermore, when representing her resources in the diagram, Olcay used arrows leading from the lesson to the resources, but she stated during the interviews that with this representation she had tried to show that the resources and the lesson have a mutual effect on each other (Hammoud, 2012; Rocha, 2018). She also placed the lesson in the center of the diagram, which may be because she considered the lesson itself the main resource.
In Turkey, there are many schools with different views regarding resource sharing. This situation caused a conflict between the internal and external schemes possessed by Olcay. In France, where Gueudet and Trouche carried out their study, there is no exam system comparable to the one in Turkey, which may be why the factors related to the exam system differed. It can be said that the national exam, in which the students were expected to do well at the end of middle school, considerably affected Olcay's documentational genesis process.
Recommendations for Further Research
It was observed during the research that teachers were worn out between the curriculum and the exam system. While the approach adopted in the curriculum was process-oriented, the evaluation method was result-oriented, which was an important factor in creating a dilemma for teachers. For this reason, future studies should investigate how teachers manage the items that are compatible and incompatible with the curriculum and the exam system, and how these differences affect the process of documentational genesis.
Considering that teachers draw on their previous experience and on the questions used in exams in previous years, it may be said that exam questions also have the characteristic of being a resource for teachers. In this study, the teacher's documentational genesis process was examined in a case in which the curriculum outcomes and the exam system did not match. It may therefore be important to carry out studies that demonstrate how resources from the national exam system affect the documentational genesis process when they do match the outcomes of the curriculum.
Moreover, if a diary is to be used in studies carried out with teachers in Turkey, the design of the semi-structured diary should differ as much as possible from the class notebook. In this way, the negative situation that arose in this study can be avoided, and more productive data can be collected from the diaries. The literature also recommends that the information given to teachers about diaries be detailed and that the diaries be checked at every stage (Bolger et al., 2003).
For closer and more detailed analyses of the documentational genesis process, longitudinal qualitative studies can be conducted. This study was conducted with only one teacher; with an increase in the number of such studies, different situations and schemes can be seen, or various situations can be identified that show similar schemes.
260736995 | pes2o/s2orc | v3-fos-license | Quantification of biases in predictions of protein–protein binding affinity changes upon mutations
Abstract Understanding the impact of mutations on protein–protein binding affinity is a key objective for a wide range of biotechnological applications and for shedding light on disease-causing mutations, which are often located at protein–protein interfaces. Over the past decade, many computational methods using physics-based and/or machine learning approaches have been developed to predict how protein binding affinity changes upon mutations. They all claim to achieve astonishing accuracy on both training and test sets, with performances on standard benchmarks such as SKEMPI 2.0 that seem overly optimistic. Here we benchmarked eight well-known and well-used predictors and identified their biases and dataset dependencies, using not only SKEMPI 2.0 as a test set but also deep mutagenesis data on the severe acute respiratory syndrome coronavirus 2 spike protein in complex with the human angiotensin-converting enzyme 2. We showed that, even though most of the tested methods reach a significant degree of robustness and accuracy, they suffer from limited generalizability properties and struggle to predict unseen mutations. Interestingly, the generalizability problems are more severe for pure machine learning approaches, while physics-based methods are less affected by this issue. Moreover, undesirable prediction biases toward specific mutation properties, the most marked being toward destabilizing mutations, are also observed and should be carefully considered by method developers. We conclude from our analyses that there is room for improvement in the prediction models and suggest ways to check, assess and improve their generalizability and robustness.
INTRODUCTION
Proteins interact with each other to form complexes that perform a wide range of biological functions in the intra- and extracellular media, and are involved in key processes such as signal transduction, cell growth and proliferation, and cell apoptosis. It is therefore of fundamental interest to understand how amino acid substitutions impact the ability of proteins to bind to their interacting partners. Such insights would shed light on pathogenic mechanisms, since aberrant protein-protein interactions (PPIs) caused by deleterious variants are often central to Mendelian disorders and complex diseases such as cancer [1][2][3][4]. From a biotechnological perspective, it would improve the design of drugs that modulate PPIs, as targeting these interactions is an established strategy in the treatment of disease [5,6].
There are several experimental methods for estimating the impact of mutations on PPIs. Biophysical methods such as isothermal titration calorimetry allow in-depth estimation of protein binding thermodynamics [7]; in contrast, high-throughput screening assays such as yeast two-hybrid systems only allow identification of binary PPIs but have the advantage of being applicable at a large scale [8]. However, given that all experimental approaches remain challenging, costly and time-intensive, there is room for computational methods, which provide effective alternatives for predicting and achieving a better understanding of PPIs.
Over the last decade, many studies have been dedicated to the development of bioinformatics tools to predict the impact of mutations on protein-protein binding affinity (ΔG_b), the thermodynamic descriptor of PPIs [9][10][11][12][13][14][15][16][17][18][19][20][21]. These tools are mainly based on structural features derived from experimentally characterized protein complexes and/or on evolutionary data. These features are usually combined using standard machine learning techniques, but deep learning algorithms are starting to be used in predictor construction [20].
The first attempts to predict protein-protein binding affinity changes upon mutations (ΔΔG_b) were based on physical energy functions [22], with predictors such as Rosetta [9] (2002), FOLDEF [10] (2002) and DComplex [11] (2004). The lack of sufficiently large and standardized datasets of experimental ΔΔG_b values prevented them from being trained directly on such data. For this reason, some of them (e.g. DComplex) were completely unsupervised, while others (e.g. Rosetta and FOLDEF) were trained on experimental values of protein stability changes upon mutations (ΔΔG) reported in the ProTherm [23] dataset, under the assumption that the physical properties of intraprotein interactions are transposable to interprotein interactions at the interface. In this case, experimental data were used only to parameterize the energy functions and to weight their individual contributions. Now, the SKEMPI dataset [24,25] fills this gap. It is considered the gold standard for training and testing ΔΔG_b predictors. Its first release in 2012, SKEMPI 1.0 [24], collected, curated, selected and standardized entries from literature searches and from already existing datasets (ASEdb [26], PINT [27] and [28]). This first release allowed the development of a generation of ΔΔG_b predictors such as BeAtMuSiC [12] (2013), mCSM [13] (2014), MutaBind [14] (2016) and BindProfX [15] (2017). The large amount of collected experimental values enabled a more extensive use of machine learning methods (e.g. in mCSM), as well as leveraging other nonphysical information to predict energy values. For instance, evolutionary information was extracted from homologous structures (in BindProfX) and sequences (in MutaBind).
While these tools achieve good prediction accuracy on their respective training sets, the extent to which these results generalize to unseen data is one of the open issues in the field. Indeed, like all supervised machine learning methods, they are likely to suffer from undesirable biases toward the learning set, which often hinder the generalization of their predictions. One example of this problem is the bias toward destabilizing values of the folding free energy change upon mutations (ΔΔG), which has been thoroughly analyzed in a series of investigations [33][34][35]. In summary, it has been shown that training protein stability predictors on the common experimental datasets that are dominated by destabilizing mutations leads to much better performance on destabilizing than on stabilizing mutations.
Although prediction biases have been studied for predictors of stability changes caused by mutations, they have not been studied for protein-protein affinity changes; yet having accurate and unbiased prediction tools for ΔΔG_b values is crucial for a wide range of biotechnological applications. In this paper, we have systematically quantified possible biases in state-of-the-art protein-protein ΔΔG_b prediction methods. More precisely, we evaluated their predictions on a set of mutations with experimentally measured ΔΔG_b values taken from [25], and on high-throughput data on the binding between the human angiotensin-converting enzyme 2 (ACE2) and the receptor binding domain (RBD) of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike protein taken from [36]. After an analysis of the methods' performances, we suggest strategies to limit and correct possible biases and thus to further improve the methods' generalizability and scores.
Protein-protein binding affinity change upon mutations
The thermodynamic protein-protein binding affinity ΔG_b is a measure of the strength of a PPI and is defined from the Gibbs free energy:

ΔG_b = R T ln(K_D)   (1)

where R is the gas constant, T the absolute temperature (in K) and K_D the equilibrium dissociation constant of the PPI. We use the convention that the stronger the interaction, the more negative the value of ΔG_b, and express it in kcal/mol. Under the action of a mutation, we define the binding affinity change as

ΔΔG_b = ΔG_b(mt) − ΔG_b(wt)   (2)

where wt refers to the wild-type complex and mt to the mutant. Thus, positive ΔΔG_b values correspond to mutations that destabilize the complex and negative values to stabilizing mutations. Since the binding affinity is a thermodynamic state function, mutating from a wild-type complex to a mutant complex and then mutating back results in no net change in ΔΔG_b, which is expressed by the following equation:

ΔΔG_b(wt→mt) + ΔΔG_b(mt→wt) = 0   (3)

We will refer to this property as the symmetry property.
In what follows, we will call 'direct mutation' a mutation that goes from the wild-type to the mutant complex. Conversely, we will call 'reverse mutation' a mutation that goes from the mutant to the wild-type complex. Note that the terms wild-type, mutant, direct and reverse are defined with respect to the proteins that are part of our datasets and do not necessarily have a biological interpretation.
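The definitions above translate into a few lines of code. The following sketch uses the gas constant in kcal/(mol·K) and illustrative K_D values (not taken from any dataset) to show the sign convention and the symmetry property:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def dG_b(K_D, T=298.0):
    """Binding affinity from the dissociation constant: dG_b = R*T*ln(K_D)."""
    return R * T * math.log(K_D)

def ddG_b(K_D_wt, K_D_mt, T=298.0):
    """Binding affinity change upon mutation: ddG_b = dG_b(mt) - dG_b(wt)."""
    return dG_b(K_D_mt, T) - dG_b(K_D_wt, T)

# A nanomolar binder has a strongly negative dG_b (about -12.3 kcal/mol here).
print(round(dG_b(1e-9), 2))

# A mutation that weakens binding (larger K_D) gives a positive ddG_b,
# and the direct and reverse changes cancel exactly (symmetry property).
direct = ddG_b(1e-9, 1e-6)
reverse = ddG_b(1e-6, 1e-9)
print(direct > 0, round(direct + reverse, 10))
```

Note that the symmetry holds identically for the true thermodynamic quantities; the rest of the paper examines how far predictors deviate from it.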
Defining protein-protein interfaces
The relative solvent accessibility (RSA) of a residue in a three-dimensional (3D) structure is defined as the ratio (in %) of its solvent-accessible surface area in the structure and in an extended tripeptide Gly-X-Gly [37]. We calculated RSA values using our in-house software MuSiC [38] (which uses an extension of the DSSP algorithm [39]), available on the dezyme.com website. We distinguished between interactant-RSA (iRSA) and complex-RSA (cRSA), which correspond to the RSA calculated from the structure containing solely the considered interactant and from the structure containing the complex with both interactants, respectively. We defined the RSA change upon binding as ΔRSA := iRSA − cRSA; it measures how much the PPI changes the solvent accessibility of a residue. A residue is considered to be in the protein-protein interface if its ΔRSA is greater than 5%.
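The ΔRSA interface criterion is straightforward to implement once iRSA and cRSA are available. The sketch below assumes precomputed, hypothetical RSA values rather than calling MuSiC or DSSP:

```python
def delta_rsa(iRSA, cRSA):
    """RSA change upon binding (in %): how much complexation buries the residue."""
    return iRSA - cRSA

def is_interface(iRSA, cRSA, threshold=5.0):
    """A residue belongs to the interface if binding buries more than 5% RSA."""
    return delta_rsa(iRSA, cRSA) > threshold

# Hypothetical residue, 40% exposed in the lone interactant, 12% in the complex:
print(is_interface(40.0, 12.0))  # True: 28% of surface buried by binding
print(is_interface(30.0, 28.0))  # False: barely affected by binding
```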
Datasets of binding affinity changes upon mutations
We considered two datasets. The first is based on the SKEMPI sets [24,25], containing mutations in different protein-protein complexes of known 3D structure available in the Protein Data Bank (PDB) [40], whose ΔΔG_b values have been measured experimentally using biophysical methods performed by various laboratories. The number of characterized mutations per protein typically ranges from a few to a few dozen, and in rare cases reaches a few hundred [41,42]. These datasets yield relatively accurate ΔΔG_b values but have the disadvantage of being unsystematic and of reflecting the specific interests of the authors in the choice of proteins and mutations.
The SKEMPI 2.0 dataset [25] contains 7085 entries and is the most comprehensive, well-curated and diverse dataset of its kind. First, we discarded entries without a ΔΔG_b value and entries describing multiple mutations. We then aggregated all redundant entries (with the same mutation in the same PDB structure) by taking their average ΔΔG_b value. To remove the dependency on the quality of the structures, we also dropped all mutations in low-resolution X-ray structures (resolution > 2.5 Å) and in structures obtained by nuclear magnetic resonance spectroscopy. This defines our first benchmark dataset, called S2536, which contains 2536 mutations in 205 different PDB structures.
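The curation steps above (discard missing ΔΔG_b, keep single mutations, keep good X-ray structures, average redundant entries) can be sketched as a small filtering pipeline. The field names and toy values below are illustrative, not the real SKEMPI 2.0 schema:

```python
# Toy stand-in for SKEMPI 2.0 entries; fields and values are hypothetical.
entries = [
    {"pdb": "1ABC", "mut": "LA45A",       "ddG": 1.2,  "res": 1.9, "method": "XRAY"},
    {"pdb": "1ABC", "mut": "LA45A",       "ddG": 1.4,  "res": 1.9, "method": "XRAY"},
    {"pdb": "1ABC", "mut": "LA45A,LB12G", "ddG": 2.0,  "res": 1.9, "method": "XRAY"},  # multiple mutation
    {"pdb": "2XYZ", "mut": "HA33F",       "ddG": None, "res": 2.0, "method": "XRAY"},  # no ddG value
    {"pdb": "3DEF", "mut": "HA33F",       "ddG": -0.3, "res": 3.1, "method": "XRAY"},  # low resolution
    {"pdb": "4GHI", "mut": "SA10T",       "ddG": 0.6,  "res": 0.0, "method": "NMR"},   # NMR structure
]

kept = [e for e in entries
        if e["ddG"] is not None                          # entries with a ddG_b value
        and "," not in e["mut"]                          # single mutations only
        and e["method"] == "XRAY" and e["res"] <= 2.5]   # high-resolution X-ray only

# Average redundant entries (same mutation in the same PDB structure).
agg = {}
for e in kept:
    agg.setdefault((e["pdb"], e["mut"]), []).append(e["ddG"])
dataset = {key: round(sum(v) / len(v), 6) for key, v in agg.items()}
print(dataset)  # {('1ABC', 'LA45A'): 1.3}
```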
The second dataset we considered contains affinity values obtained through deep mutagenesis experiments that systematically characterized all possible mutations in the RBD of the SARS-CoV-2 spike glycoprotein in interaction with the human ACE2 receptor [36]. This dataset has the advantage of being systematic and therefore less biased. However, the measured values are not exact ΔΔG_b values but close correlates. From this set, we first discarded the mutations of the few residues located in the N- and C-terminal tails of the spike protein, as they are absent from the reference PDB structure 6M0J. We then identified the ACE2-RBD interface residues, of which there are 20, using the above ΔRSA criterion. We focused on all 380 possible mutations of these 20 residues to define our second benchmark dataset, C380.
For both the S2536 and C380 datasets, considered by definition as direct mutations, we constructed the datasets of reverse mutations using the symmetry property of Eq. (3) to assign a ΔΔG_b value to each reverse mutation. When the distinction is required, we append the suffix -D to the name for a dataset of direct mutations, the suffix -R for a dataset of reverse mutations and the suffix -DR for a dataset of both direct and reverse mutations (e.g. S2536-D, S2536-R and S2536-DR).
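Constructing a reverse set from a direct set only requires swapping the wild-type and mutant amino acids and negating ΔΔG_b. A minimal sketch, assuming a simple mutation code of the form wt-AA, chain, position, mt-AA (the codes below are hypothetical):

```python
def reverse_entry(mutation, ddG):
    """Build the reverse mutation (mt -> wt) via the symmetry property:
    ddG_b(reverse) = -ddG_b(direct). Code format assumed: e.g. 'LA45G'."""
    wt, middle, mt = mutation[0], mutation[1:-1], mutation[-1]
    return mt + middle + wt, -ddG

direct = [("LA45G", 1.5), ("HA33F", -0.4)]          # a toy "-D" set
reverse = [reverse_entry(m, g) for m, g in direct]  # the corresponding "-R" set
both = direct + reverse                             # the "-DR" set
print(reverse)  # [('GA45L', -1.5), ('FA33H', 0.4)]
```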
Protein 3D structures
For predicting direct mutations in the S2536 set, we used the PDB structures of the protein complexes that have been collected in the SKEMPI 2.0 database, as they were curated to be as close as possible to the protein complexes on which the measurements were made. For direct mutations in the C380 set, we used the experimental 3D structure of the ACE2-RBD complex with PDB ID 6M0J [43], as referenced in [36].
For reverse mutations, we modeled the mutant complexes using the comparative modeling software MODELLER [44] with default parameters and the wild-type structures as templates. MODELLER reconstructs the side chain of the mutated residue, then slightly rearranges the backbone and the side chains of the complex to avoid steric clashes and to optimize atomic interactions with the new mutated residue. Since the template and mutant structures differ by only one mutation, the resulting model remains very close to the initial structure.
Prediction methods tested
We benchmarked the eight best-known, available and widely used ΔΔG_b predictors published in recent years. We briefly describe their characteristics. mCSM-PPI2 [16] is a machine learning predictor that uses graph-based structural signatures of the inter-residue interaction network, evolutionary information, complex network metrics and energy terms.
MutaBind2 [17] uses seven features including protein-solvent interactions, evolutionary conservation and physics-based thermodynamic stability.
BeAtMuSiC [12] is our in-house predictor. It estimates the ΔΔG_b as a linear combination of the stability changes upon mutations (ΔΔG) of the protein complex and of the individual interactants, computed by the PoPMuSiC predictor [45]. It uses statistical energy functions for ΔΔG estimation, derived from the Boltzmann law, which relates the frequency of occurrence of a structural pattern to its free energy.
SSIPe [18] combines protein interface profiles obtained from structure and sequence homology searches with physics-based energy functions. SAAMBE-3D [19] is a machine learning-based predictor that utilizes 33 knowledge-based features representing the physical environment surrounding the mutation site.
NetTree [20] is a deep learning method based on convolutional neural networks and algebraic topology features. It uses element- and site-specific persistent homology to represent the structure of a protein complex and to translate it into topological features.
FoldX [46] is a purely physics-based method that uses empirical energy functions to predict ΔΔG_b, as described in the FOLDEF paper [10]. Its energy terms are defined by theoretical models (e.g. the van der Waals potential energy function), which are parameterized and weighted using empirical data.
BindProfX [15] combines the FoldX prediction with a profile score based on structural interface alignments obtained by the iAlign software [47]. The profile score exploits evolutionary information by comparing the frequencies of occurrence of the wild-type and mutant amino acids in structurally similar interfaces. BindProfX is only applicable to protein dimers; when it is applied to higher order multimers, we use the FoldX term only.
These predictors can be classified into three groups based on the nature of their approach: mCSM-PPI2, MutaBind2, SAAMBE-3D and NetTree are machine learning predictors whose features are extracted from protein structures, physics and evolution; SSIPe and BindProfX linearly combine an evolutionary term and a physics-based energy term using ΔΔG_b data to optimize their models; BeAtMuSiC and FoldX are pure physics-based predictors.
In terms of training set, we have the following classification: NetTree was trained on antigen-antibody interaction data from the AB-Bind dataset [29], which is partially included in the SKEMPI 2.0 dataset; FoldX was trained on ΔΔG data from ProTherm [23], but note that it has been updated several times since its first publication [10] in 2002 and it is unclear whether or not the current version (v5) [48] has used ΔΔG_b data for parameterization; BeAtMuSiC was also trained on ΔΔG values, with only two parameters, balancing interprotein and intraprotein contributions, adjusted using SKEMPI 1.0 ΔΔG_b values; BindProfX was trained on SKEMPI 1.0 entries; all other predictors were trained on SKEMPI 2.0. Finally, mCSM-PPI2 and MutaBind2 included reverse mutations in addition to direct mutations in their training datasets.
An upper bound to the accuracy of predictors
Binding affinity change values collected from the literature and available in S2536 are derived from experiments performed using different techniques and under different environmental conditions such as pH, temperature or solvent additives. These differences add to the experimental error and usually lead to different ΔΔG_b values for the same mutation in the same protein complex. Furthermore, although SKEMPI 2.0 is particularly well curated, curation errors cannot be avoided, as illustrated by the error corrections between SKEMPI 1.0 and SKEMPI 2.0 (see Supplementary Section 1). The uncertainty on ΔΔG_b values places an upper bound on the precision of the predictions, which cannot exceed the accuracy of the experimental data.
An analytical method has recently been proposed [49,50] for estimating the upper bound on the Pearson correlation coefficient (ρ), which measures the strength of the linear relation between predicted and target values, and the lower bound on the root mean squared error (RMSE), which is a measure of the average error of a prediction. These bounds are expressed as

sup(ρ) = √(1 − σ²/σ²_DB)   (4)

inf(RMSE) = σ   (5)

where σ²_DB is the variance of the ΔΔG_b values in the whole dataset and σ² is the mean of the individual variances for redundant entries. We estimated the values of these bounds using the 116 redundant clusters with at least three entries among all single mutations from the SKEMPI 2.0 dataset.
We obtained sup(ρ) = 0.89 and inf(RMSE) = 0.70 kcal/mol. Note, however, that these bounds are probably overestimated and underestimated, respectively, due to an underestimation of σ². Indeed, only independent, uncorrelated ΔΔG_b measures of a given mutation yield a correct estimation of the variance, which does not always seem to be the case.
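The link between the experimental noise and these bounds can be sketched numerically. The formulas used below, sup(ρ) = √(1 − σ²/σ²_DB) and inf(RMSE) = σ, are the standard noise-limited bounds; the variances are illustrative values back-solved from the quoted numbers, not recomputed from SKEMPI:

```python
import math

def sup_pearson(var_noise, var_db):
    """Upper bound on rho: sqrt(1 - sigma^2 / sigma_DB^2)."""
    return math.sqrt(1.0 - var_noise / var_db)

def inf_rmse(var_noise):
    """Lower bound on RMSE: the mean experimental noise sigma itself."""
    return math.sqrt(var_noise)

var_noise = 0.70 ** 2                   # sigma^2 = 0.49 (kcal/mol)^2
var_db = var_noise / (1 - 0.89 ** 2)    # back-solved so that sup(rho) = 0.89
print(round(sup_pearson(var_noise, var_db), 2), round(inf_rmse(var_noise), 2))
```

This makes explicit why a predictor whose correlation exceeds sup(ρ) is reproducing noise rather than signal.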
The performances of the tested predictors presented in the following sections can be compared with these 'optimal' values. It should be stressed that an accuracy better than these bounds suggests that the predictor is overfitted toward the dataset. A good prediction should thus have a Pearson correlation significantly above zero but below the upper bound of 0.89. It is also expected to have an RMSE value above the lower bound of 0.70 kcal/mol. To give the reader an intuitive idea of the scale of the RMSE, we note that a predictor that consistently predicts ΔΔG_b to be zero would obtain RMSE values of 2.3 and 1.8 kcal/mol on S2536 and C380, respectively.
Biases in the S2536 dataset
As mentioned by the SKEMPI authors [24,25], mutations characterized and reported in the literature are not systematic but reflect the interests of the experimenters. The collected data therefore have biases toward specific residues, mutation types, spatial locations, proteins and protein families. These biases can lead to overoptimistic assessments of the predictors, even when strict cross-validation methods are used. Indeed, if training and test sets are subject to the same biases, a predictor can learn and replicate them, increasing both its apparent performance and its generalization error. This can lead to a gap between the performances estimated from a biased test set and from a set of systematic mutations, raising concerns about the reliability of predictors. In this section we quantify and discuss some of the biases in the S2536 mutation set.
First, we note the imbalance in terms of mutation types. The occurrences of the 380 possible mutation types in S2536 are shown in Figure 1A. Half of the mutations are toward alanine, 222 mutation types occur less than five times and 92 mutation types are not represented. This tendency is related to the prevalence of experimental alanine-scanning data in S2536. It may weaken the predictions for underrepresented mutation types.
Another notable imbalance is toward mutations located at protein-protein interfaces: 78% of S2536 entries are mutations of the 9% of residues located at the interface. Although interface residues are usually more critical for the interaction, non-interface regions can also be important, and their effects risk being overlooked by the predictors.
Finally, the ΔΔG_b distribution is largely shifted toward positive values, as shown in Figure 1B. It has a mean value of 1.11 kcal/mol and a standard deviation of 1.99 kcal/mol, with a clear prevalence of destabilizing mutations. This imbalance is not surprising, as experimentally studied complexes are often optimized for high binding affinity by evolution. However, it tends to cause predictors to systematically output destabilizing ΔΔG_b values even for neutral and stabilizing mutations, thus preventing the symmetry property (Eq. (3)) from being satisfied. This issue, which is particularly problematic for, e.g., rational protein design, has been identified and widely investigated in the context of stability changes upon mutations [33][34][35][51][52][53]. In the next sections, we will examine it in the context of changes in binding affinity.
Note that these imbalances were observed in S2536, but also occur in all single-site mutations of the SKEMPI 2.0 dataset (see Supplementary Section 2).
Performances on SKEMPI 2.0
We tested the performances of the eight selected predictors described in Methods (Section 2) on the direct and reverse mutations of the S2536 benchmark dataset. For that purpose, we used the Pearson correlation coefficient between predicted and experimental ΔΔG_b values (ρ) as performance metric. The results are represented in Figures 2-3 and Table 1. Other metrics, such as the RMSE and the Spearman rank correlation (r), lead to similar conclusions (as shown in Table 1 and https://github.com/3BioCompBio/DDGb_bias).
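The two main metrics are simple to compute from paired predicted/experimental values. A dependency-free sketch with toy numbers (not benchmark results):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(pred, exp):
    """Root mean squared error between predictions and experiments."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, exp)) / len(pred))

exp  = [0.5, 1.8, -0.2, 2.4]   # toy experimental ddG_b values (kcal/mol)
pred = [0.7, 1.5,  0.1, 2.0]   # toy predictions
print(round(pearson(pred, exp), 3), round(rmse(pred, exp), 3))
```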
This benchmark, though informative, should be considered with caution, as the extent of cross-validation differs according to the predictor. The main issue is that each of the benchmarked predictors is trained on a different subset of S2536, with various covering ratios (CR) with respect to the subsets of direct (S2536-D) and reverse (S2536-R) mutations (Table 2). For instance, the training set of mCSM-PPI2 contains 99% of the S2536-D mutations, while that of NetTree contains only 10%. Furthermore, mCSM-PPI2 is trained on almost all reverse mutations of S2536-R, and MutaBind2 on the fraction necessary to balance the number of stabilizing and destabilizing mutations.
The best-performing predictors on the direct mutation set S2536-D are mCSM-PPI2, MutaBind2 and SAAMBE-3D, with Pearson correlations ρ of 0.91, 0.90 and 0.88, respectively. These values exceed or are very close to the upper bound of 0.89 (Eq. (4)), which suggests some overfitting toward the training set. They are followed by BindProfX, SSIPe, FoldX, BeAtMuSiC and NetTree.
We observe that the performance of all predictors but SSIPe and BindProfX drops significantly when tested on the reverse S2536-R mutations. The magnitude of the drop indicates how much each predictor is biased toward direct mutations, which are mostly destabilizing. mCSM-PPI2 and MutaBind2 perform the best on S2536-R, which is expected since they have reverse mutations in their training set; the performance of mCSM-PPI2 drops less than that of MutaBind2, probably because the latter has seen only a part of the reverse mutations during training. Surprisingly, SSIPe and BindProfX are the most robust toward reverse mutations, with almost no drop in performance, although they do not use reverse mutations in training; their robustness is therefore not acquired by training but rather stems from the symmetry properties of the model. In contrast, BeAtMuSiC, SAAMBE-3D and NetTree basically fail to predict the ΔΔG_b of reverse mutations. Note the particularly large drop in performance of SAAMBE-3D, whose Pearson correlation decreases from 0.88 to 0.11; this predictor thus appears to be heavily biased toward destabilizing mutations.
This first benchmark shows that a bias toward destabilizing mutations is present in the context of ΔΔG_b predictions. Note that the drop in performance observed when passing from direct to reverse mutations can partly be attributed to this bias, but also to the larger proportion of mutations in S2536-R than in S2536-D that are unseen during training.
For the six methods trained on ΔΔG_b data (mCSM-PPI2, MutaBind2, SSIPe, SAAMBE-3D, NetTree and BindProfX), the covering ratio CR between training and benchmark datasets accurately predicts the performances of the predictors. Indeed, we found an almost linear relationship between the CR of the six predictors and their Pearson correlation ρ on the S2536-D set, with a coefficient of determination R² as high as 0.91 (Figure 4).
While this observation does not prove that these predictors are dataset specific and overfitted, it raises some concerns about their ability to generalize to mutations outside the training set. Therefore, further investigation based on a dataset of more systematic and unseen mutations is required: this is the topic of the next subsection.
Performances on SARS-CoV-2 mutations
The C380 dataset has two major advantages over S2536: it is unknown to the eight benchmarked predictors and it is systematic in terms of mutation types. This makes it a better dataset to assess the generalizability of the predictors. As shown in Figure 2, the performances of all predictors but NetTree drop from S2536 to C380, with no score higher than 0.6. The performance comparison between direct and reverse mutations of C380-D and C380-R confirms the conclusions of the previous section: all predictors suffer, to a different extent, from a bias toward destabilizing mutations. A way to quantify this bias for a given predictor is to compute the symmetry violation of Eq. (3), i.e. the shift

δ := ΔΔG_b(wt→mt) + ΔΔG_b(mt→wt)   (6)

averaged over all C380 dataset entries, where both terms are the predicted values. While some fluctuations in δ are expected and acceptable, a systematic deviation of the mean shift δ from zero quantifies the asymmetry of a predictor and its bias toward stabilizing or destabilizing mutations. A perfectly unbiased predictor has δ = 0; the 'worst-case' value can be estimated as twice the average ΔΔG_b value in the dataset of direct mutations, which is 1.24 kcal/mol in C380. We thus estimated the 'worst-case' δ-value to be about 2.5 kcal/mol. We show in Figure 5 the distributions of δ-values for the eight predictors on C380. Analogous δ-value distributions are depicted for S2536 in Supplementary Figure S-5. We observe that all predictors have a statistically significant shift toward destabilizing mutations, with a vanishing p-value, but the amplitude of the shift varies widely. The most symmetric predictors are, as expected, those that perform best on reverse mutations: MutaBind2 with δ = 0.28 kcal/mol, followed by mCSM-PPI2 with δ = 0.47 kcal/mol. This confirms that the usage of reverse mutations for training can largely reduce the asymmetry of the predictions. More biased predictions are observed for FoldX, SSIPe, BeAtMuSiC, BindProfX and SAAMBE-3D, with δ = 1.20, 1.28, 1.44, 1.49 and 1.62 kcal/mol, respectively. These values indicate a bias toward destabilizing mutations, which is, however, still significantly lower than the 'worst-case' bias. This means that such predictors are still able to distinguish the tendency between a set of mostly stabilizing and a set of mostly destabilizing mutations. In contrast, NetTree obtains δ = 4.05 kcal/mol, which is largely above the 'worst-case' bias and reflects its inability to distinguish stabilizing from destabilizing mutations. This particularly large δ-value can partly be explained by NetTree's tendency to predict very large ΔΔG_b values of about 2 kcal/mol, much higher than the average experimental values.
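The mean shift δ is simply the average, over paired direct/reverse predictions of the same mutations, of their sum. A minimal sketch with a toy predictor that systematically adds a destabilizing offset (all numbers hypothetical):

```python
def mean_shift(pred_direct, pred_reverse):
    """Mean of pred(wt->mt) + pred(mt->wt) over paired mutations;
    zero for a perfectly symmetric predictor."""
    return sum(d + r for d, r in zip(pred_direct, pred_reverse)) / len(pred_direct)

# Toy predictor that always adds a destabilizing offset of 0.8 kcal/mol
# to the true direct values (and to the negated values for reverses).
true_direct = [1.0, -0.5, 2.1]
pred_d = [g + 0.8 for g in true_direct]
pred_r = [-g + 0.8 for g in true_direct]
print(round(mean_shift(pred_d, pred_r), 6))  # 1.6: twice the systematic offset
```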
In summary, this benchmark represents a fair and objective way to evaluate the performance of the predictors, since C380 is unknown to all of them. It confirms the presence of biases toward destabilizing mutations in state-of-the-art ΔΔG_b predictors and highlights the two predictors, mCSM-PPI2 and MutaBind2, that are least affected by this bias.
Performances and biases toward mutation properties
We investigated the predictors' performances on subsets of S2536-D containing mutations sharing similar properties, i.e. mutation type, mutation location and type of complex, in order to highlight the predictors' strengths and weaknesses. As the standard deviations σ of the experimental ΔΔG_b values widely differ according to the subset, we used the normalized RMSE, defined as nRMSE := RMSE/σ, to assess the predictions. The results are shown in Figure 6. All observations discussed below are statistically significant, with almost vanishing P-values (< 0.0001).
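The nRMSE is the RMSE divided by the standard deviation of the experimental values of the subset, which makes subsets with different ΔΔG_b spreads comparable. A small sketch with toy values:

```python
import math
import statistics

def nrmse(pred, exp):
    """Normalized RMSE := RMSE / sigma, with sigma the (population) std
    of the experimental values in the subset."""
    r = math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, exp)) / len(pred))
    return r / statistics.pstdev(exp)

exp  = [0.0, 1.0, 2.0, 3.0]   # toy experimental ddG_b values for one subset
pred = [0.5, 1.5, 1.5, 2.5]   # toy predictions, each off by 0.5 kcal/mol
print(round(nrmse(pred, exp), 3))  # 0.447
```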
We first analyzed separately the subset of mutations toward alanine and the subset of other mutations. As seen in Figure 6A, no substantial differences are observed between these two subsets, except that MutaBind2 and SAAMBE-3D perform slightly better on the latter. This might be explained by actual strengths/weaknesses of the predictors, or could suggest a mild overfitting, since it is easier to memorize ΔΔG_b values for underrepresented mutation types.
Most predictors are slightly weaker on mutations outside the protein-protein interface (Figure 6B). This is foreseeable, since the effects of non-interface mutations on binding affinity are indirect and thus more difficult to predict. MutaBind2, BindProfX, SAAMBE-3D and NetTree suffer from the largest increase in nRMSE. In contrast, BeAtMuSiC and FoldX present similar performances on both subsets. SSIPe shows a surprisingly small drop in performance on mutations outside the interface, although it explicitly claims to be able to predict interface mutations only.
When comparing mutations in dimers to mutations in higher order multimers (Figure 6C), we observe that mCSM-PPI2, BeAtMuSiC and FoldX are the most stable, and that MutaBind2, SSIPe, SAAMBE-3D and BindProfX show the largest performance drop. SSIPe's poor performance on higher order multimers is not surprising, as its authors explicitly state that it does not predict such mutations. BindProfX's drop is related to the fact that its predictions on higher order multimers are taken from FoldX (see Methods). Paradoxically, mCSM-PPI2 does not require specifying which chains make up the two interactants, although higher order multimers have several protein-protein interfaces and there is thus an ambiguity. In spite of this, it maintains the same performance on both subsets, which could suggest overfitting toward its training dataset. In contrast, MutaBind2 asks for the chains included in each interactant, but has the largest performance drop on higher order multimers.
We also assessed the performances on other S2536-D subsets, partitioned by secondary structure, solvent exposure in the complex and interface sub-regions [54] (definitions in Supplementary Section 4), but no relevant observations were found. Results are available at https://github.com/3BioCompBio/DDGb_bias.
Strategies for avoiding biased predictions
To ensure the generalizability of the predictions, k-fold cross-validation procedures should be carefully performed, avoiding blindly splitting the training set. Indeed, when separating a dataset into folds, a direct mutation and its corresponding reverse mutation should end up in the same fold, to avoid information from one mutation influencing the prediction of the other. As the S2536 dataset contains multiple homologous complexes differing by only a few mutations, random cross-validations can also lead to information leaks from training to testing sets and provide overoptimistic results. Thus, mutations on homologous complexes should also be kept in the same fold [24].
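A leakage-aware split can be implemented by assigning each direct/reverse pair, and all mutations on homologous complexes, a shared group label, then distributing whole groups to folds. The group labels below are hypothetical; in practice they would come from pairing and sequence clustering:

```python
from collections import defaultdict

def group_folds(groups, k):
    """Assign whole groups to k folds so no group straddles a train/test split.
    Greedy bin-packing: each group goes into the currently smallest fold."""
    by_group = defaultdict(list)
    for idx, g in enumerate(groups):
        by_group[g].append(idx)
    folds = [[] for _ in range(k)]
    for members in sorted(by_group.values(), key=len, reverse=True):
        min(folds, key=len).extend(members)
    return folds

# Mutations 0/1 are a direct/reverse pair, 2/3 sit on homologous complexes, etc.
groups = ["pairA", "pairA", "homB", "homB", "pairC", "pairC"]
folds = group_folds(groups, 3)
print(folds)  # [[0, 1], [2, 3], [4, 5]]
```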
However, dataset biases can be learned by the predictors even if a strict cross-validation procedure is used. To illustrate this, we started by noticing that half of the mutations from S2536-D are toward alanine (X → A) and thus that half of the mutations from S2536-R are from alanine (A → X). Knowing moreover that S2536-D and S2536-R contain mostly destabilizing and mostly stabilizing mutations, respectively, the sign of ΔΔG_b can often be correctly guessed for X → A and A → X mutations, even though such a guess holds no true predictive power. In other words, predictors can learn imbalances and cross correlations between mutation properties from S2536, which improves their performance in cross-validation while also increasing their generalization error.
As a proof of this phenomenon, we created a 'perfectly biased' predictor, which estimates ΔΔG_b as the mean of the experimental ΔΔG_b values of the same mutation type in the training set (or zero if the mutation type was never encountered). This predictor manages to obtain a Pearson correlation ρ = 0.46 on S2536-DR in 10-fold cross-validation. When applying the same predictor (trained on S2536-DR) to mutation type-balanced, interface-only entries from C380-DR, the Pearson correlation falls to ρ = 0.35, and it completely vanishes, with ρ = 0.04, when dropping the interface filter and applying the predictor to the whole dataset of mutations on the RBD-ACE2 complex (-DR). The same phenomenon also happens, with slightly smaller correlations, when considering direct mutations only: we found ρ = 0.34 in 10-fold cross-validation on S2536-D, ρ = 0.27 on C380-D and ρ = 0.05 on RBD-ACE2 (-D). Note that these scores only underestimate how much dataset-dependent cross correlations from S2536 can impact predictions, as we considered mutation type-related biases only.
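The 'perfectly biased' baseline described above amounts to a lookup table of per-mutation-type means. A minimal sketch (toy mutation codes and ΔΔG_b values, not the benchmark data):

```python
from collections import defaultdict

class MutationTypeMean:
    """'Perfectly biased' baseline: predicts the mean training ddG_b of a
    mutation type (e.g. 'L->A'), or 0.0 for unseen types. Any apparent skill
    comes purely from dataset composition, not from biophysics."""

    def fit(self, mut_types, ddGs):
        sums = defaultdict(lambda: [0.0, 0])
        for t, g in zip(mut_types, ddGs):
            sums[t][0] += g
            sums[t][1] += 1
        self.means = {t: s / n for t, (s, n) in sums.items()}
        return self

    def predict(self, mut_types):
        return [self.means.get(t, 0.0) for t in mut_types]

model = MutationTypeMean().fit(["L->A", "L->A", "F->A"], [1.0, 2.0, 0.5])
print(model.predict(["L->A", "F->A", "W->G"]))  # [1.5, 0.5, 0.0]
```

Evaluated in cross-validation on a dataset whose mutation-type composition correlates with the sign of ΔΔG_b, such a table scores well; on systematic data it collapses, as reported above.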
As extensively discussed above, asymmetric predictions are another type of unwanted bias. One easy way to avoid it is to symmetrize the prediction results. Indeed, the prediction shift δ vanishes when redefining the prediction of a mutation wt → mt as ΔΔG_b^sym(wt → mt) = ½[ΔΔG_b(wt → mt) − ΔΔG_b(mt → wt)], with, as a consequence, δ = ΔΔG_b^sym(wt → mt) + ΔΔG_b^sym(mt → wt) = 0. This operation requires both wild-type and mutant structures, but does not introduce any internal modifications to the predictor itself. Some but not all mutant structures have been resolved experimentally; we listed in the https://github.com/3BioCompBio/DDGb_bias repository the pairs of resolved wild-type and mutant structures from SKEMPI 2.0 that are separated by a single mutation (more details in Supplementary Section 5). Alternatively, the unavailable mutant structures can be modeled with homology modeling techniques using the wild-type structure as a template. Symmetrized versions of all tested predictors were obtained using Eq. (7). For predictors that suffer from a strong bias toward destabilizing mutations, the Pearson correlation coefficient of the symmetrized version falls somewhere between their scores on direct and on reverse mutations. In contrast, the least asymmetric predictors, mCSM-PPI2, MutaBind2, BindProfX, FoldX and SSIPe, show a significantly improved score on the reverse datasets S2536-R and C380-R, as well as on the combined datasets S2536-DR and C380-DR, and similar or only slightly lower performance on the direct datasets S2536-D and C380-D (Supplementary Section 3). This shows that the overall performance of some predictors can be improved, and their symmetry increased, without introducing any internal changes to the model.
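In code, the symmetrization and the resulting zero shift look as follows (a reconstruction consistent with the definitions of δ and of the symmetrized prediction above; the exact form of Eq. (7) should be checked against the paper):

```python
def symmetrize(pred_direct, pred_reverse):
    """Symmetrized prediction for wt->mt: half the difference between the
    predictor's raw outputs on the direct (wt->mt) and reverse (mt->wt)
    mutations.  By construction the shift of the symmetrized predictor
    vanishes: symmetrize(d, r) + symmetrize(r, d) == 0."""
    return 0.5 * (pred_direct - pred_reverse)

def shift(pred_direct, pred_reverse):
    """Prediction shift delta (Eq. 6): zero for a perfectly symmetric
    predictor, since experimentally ddG_b(wt->mt) = -ddG_b(mt->wt)."""
    return pred_direct + pred_reverse
```

This post-processing touches only the predictor's outputs, which is why it needs both structures but no internal change to the model.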
As seen in the previous subsections, an alternative strategy to reduce the asymmetry of the predictions consists of using reverse mutations for training. Among the tested predictors, MutaBind2 and mCSM-PPI2 apply this technique and reach good symmetry properties. This practice increases the generalizability and robustness of predictors. However, the symmetrization of the training set has to be done carefully. Indeed, due to the presence of wild-type/mutant pairs in SKEMPI 2.0, adding the reverse of all mutations, as done in mCSM-PPI2, leads to redundant entries that should be avoided, as they are a source of biases.
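A sketch of a careful training-set symmetrization (the entry format — `(mutation_key, reverse_key, ddg)` tuples — is our assumption, not the paper's): reverses are added with flipped sign, but skipped when the reverse mutation is already present as its own entry, e.g. via a resolved wild-type/mutant pair, to avoid the redundant duplicates mentioned above.

```python
def augment_with_reverses(entries):
    """Add the reverse of each mutation (ddG_b sign flipped) to a training
    set, unless that reverse is already present as an entry of its own --
    blindly adding all reverses would then create redundant duplicates."""
    seen = {key for key, _, _ in entries}
    augmented = list(entries)
    for key, rkey, ddg in entries:
        if rkey not in seen:
            augmented.append((rkey, key, -ddg))
            seen.add(rkey)
    return augmented
```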
Predictors' computational efficiency
Computational time efficiency is another characteristic to consider when choosing a prediction method, especially when a large set of mutations has to be analyzed, as for example in the study of variant impacts on the interactome [2]. In terms of speed, BeAtMuSiC and SAAMBE-3D are fast enough to enable large-scale computational mutagenesis experiments; indeed, they are able to predict all possible single-site mutations in a protein complex in a few to a few tens of seconds. While FoldX is significantly slower, it can still perform all mutations in a small protein complex in a few hours. In contrast, mCSM-PPI2, MutaBind2, SSIPe, NetTree and BindProfX are time-consuming and require tens of seconds to tens of minutes to run a single mutation, which prevents their use in large-scale applications.
CONCLUSIONS
In the last decade, the computational prediction of how mutations impact protein-protein binding affinity has experienced substantial improvements. Thanks to the large amount of experimental mutagenesis data generated and to the development of new machine learning algorithms and accurate force fields, many ΔΔG_b predictors that reach good performance have been developed and used in biotechnological and biopharmaceutical applications.
However, as clearly illustrated in our benchmarking analyses, the predictive power of a method is not necessarily well represented by its scores on its training dataset, even if a strict cross-validation procedure is used. This makes the validation process particularly challenging. Here we identified two main issues, the predictors' systematic asymmetry and their lack of generalization on mutations outside their training set, which are discussed below.
Lack of generalization. A major challenge in ΔΔG_b prediction is to distinguish between statistical relations that are dataset-dependent and the 'true' ones that have a biological meaning. We would like to stress that, while physics- and evolution-based methods are at least partly equipped to tackle this problem, pure machine learning methods struggle to make this distinction. This can explain the particularly large performance drop on unknown mutations observed for most purely machine learning methods such as SAAMBE-3D, and the good generalizability properties observed for methods that are totally or partly physics-based, such as BeAtMuSiC, BindProfX and FoldX.
The generalizability of a predictor must be tested on independent sets of mutations outside the training set. Sets of systematic mutations obtained by deep mutagenesis experiments, such as C380, have the advantage of not being impacted by literature biases. They are thus appropriate for validating and benchmarking predictions, even though their ΔΔG_b values are less accurate than those obtained by individual thermodynamic experiments.
Symmetry properties. Symmetry properties should be carefully checked when constructing a prediction model. One way to assess them is on the basis of the shift δ (Eq. (6)). As a general rule, the symmetry of a predictor can be achieved by (1) using symmetric data during training, by including all or a fraction of reverse mutations, as done in mCSM-PPI2 and MutaBind2; (2) enforcing symmetry in the predictor's mathematical model, as in [33]; and (3) applying symmetry-correction methods such as the symmetrization defined in Eq. (7). Method (1) is a good practice which, as we showed, can increase the generalizability of the predictions. Method (2) can help the predictor be symmetric, but it is only applicable when the mathematical expression of the model is known. Method (3) is the easiest to implement, but is efficient only if the predictor is already reasonably symmetric.
There are additional challenges that need to be addressed. First, further data on binding affinity and interactions need to be collected. Accurate ΔΔG_b thermodynamic data have not been systematically collected for the past five years, since SKEMPI 2.0's release. Also, deep mutagenesis data on binding affinity are currently generated at a high rate but need to be collected, curated and harmonized. Secondly, the interpretation of ΔΔG_b prediction models is an issue that we do not explore in this paper and that is not sufficiently discussed in the literature. Indeed, performance is not the only criterion for evaluating a prediction model; insights into model interpretation can help in gaining a physical understanding of molecular recognition and protein-protein binding mechanisms.
Finally, there is a need for more independent assessments. We invite the community to set up blind challenges for the prediction of changes in protein-protein binding affinity upon mutations, similar to what has been done during the 26th critical assessment of predicted interactions (CAPRI) experiment [55]. These community-wide blind challenges provide important insights into whether and how different predictors achieve the targeted accuracy, and help drive the development of new methods.
Key Points
• Predicting the impact of mutations on protein-protein binding affinity has seen substantial progress over the past decade, but still faces challenging issues.
• Although many predictors achieve good performance on their training set, even in cross-validation, they usually struggle to generalize to unseen data.
• Most predictors are biased, especially toward mutations that destabilize protein-protein complexes, as their training sets are dominated by them.
• Further strategies to limit biases are proposed to improve prediction performance.
• Current machine learning-based approaches suffer more from training set overfitting issues than physics-based methods, which generally demonstrate better generalizability properties.
Figure 1. Characteristics of the S2536 dataset. (A) Number of occurrences of mutation types; (B) distribution of the experimental ΔΔG_b values (in kcal/mol).
Figure 2. Pearson correlations ρ between experimental and predicted ΔΔG_b values on direct (in blue) and reverse (in orange) mutations of S2536 (left) and C380 (right).
Figure 4. Relation between the covering ratio CR and the Pearson correlation ρ between predicted and experimental ΔΔG_b values on the S2536-D set for six benchmarked predictors. The linear regression line (dashed) and coefficient of determination (R²) are indicated.
Figure 5. Distribution of the shift δ (in kcal/mol) for the eight benchmarked predictors, calculated for mutations from C380. The vertical blue dashed lines indicate δ = 0 and the vertical red dashed lines, the mean value of δ.
Table 1: Performances of the eight benchmarked predictors measured by the Pearson correlation (ρ), the Spearman rank correlation (r) and the RMSE on the datasets S2536-D, S2536-R, C380-D and C380-R.
Table 2:
We further explored the predictors' bias toward destabilizing mutations; by comparing performances on mutations from S2536 and C380, we estimated the dataset dependence of the predictors. Predicted values and performance metrics on C380 are available on https://github.com/3BioCompBio/DDGb_bias and predictions are graphically represented in Supplementary Figure S-4.
Elderly patients undergoing mechanical ventilation in and out of intensive care units: a comparative, prospective study of 579 ventilations
Introduction: Many mechanically ventilated elderly patients in Israel are treated outside of intensive care units (ICUs). The decision as to whether these patients should be treated in ICUs is reached without clear guidelines. We therefore conducted a study with the aim of identifying triage criteria and factors associated with in-hospital mortality in this population.

Methods: All invasively ventilated elderly (65+) medical patients in the hospital were included in a prospective, non-interventional, observational study.

Results: Of the 579 ventilations, 283 (48.9%) were done in ICUs compared with 296 (51.1%) in non-ICU wards. The percentage of ICU ventilations in the 65 to 74, 75 to 84, and 85+ age groups was 62%, 45%, and 23%, respectively. The decision to ventilate in ICUs was significantly and independently influenced by age (odds ratio (OR) = 0.945, P < 0.001) and by pre-hospitalization functional status on the functional independence measure (FIM) scale (OR = 1.054, P < 0.001). In-hospital mortality was 53.0% in ICUs compared with 68.2% in non-ICU wards (P < 0.001), but the rate was not independently and significantly affected by hospitalization in ICUs.

Conclusions: In Israel, most elderly patients are ventilated outside ICUs and the percentage of ICU ventilations decreases as age increases. In our study groups, the lower mortality among elderly patients ventilated in ICUs is related to patient characteristics and not to their treatment in ICUs per se. Although the milieu in which this study was conducted is uncommon today in the western world, its findings point to possible means of managing future situations in which the demand for mechanical ventilation of elderly patients exceeds the supply of intensive care beds. Moreover, the findings of this study can contribute to the search for ways to reduce costs without having a negative effect on outcome in ventilated elderly patients.
Introduction
Mechanical ventilation is the highest priority indication for admission to ICUs according to accepted guidelines [1]. In Israel the shortage of ICU beds, taken together with the growing number of patients who need them, has led to a state in which the threshold for ICU-refusal for ventilated elderly patients is much lower than might be expected in accordance with the consensus statement [2]. As a result, a significant percentage of ventilated elderly patients are treated outside the ICU. This reality, which is very common in Israel but much less so in the rest of the western world is, as would be expected, not well reported in the literature. The vast majority of series dealing with mechanical ventilation primarily addresses patients in ICUs [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21], and only a few papers also describe patients ventilated outside these units [22,23]. In a comprehensive review of the literature we did not find a single study that included all ventilated elderly patients and compared those treated in ICUs with those who were not. The present study was designed to address this deficiency in the literature.
The aims of this prospective study were: (a) to measure the extent of mechanical ventilation in ICUs compared to non-ICU wards among all elderly patients who were ventilated for medical reasons; (b) to determine which characteristics affect the decision to admit ventilated elderly patients to the ICU; and (c) to determine the factors that affect in-hospital mortality in the combined population and whether admission to the ICU is one of those factors.
Study population
All hospitalized patients 65 years or older who underwent tracheal intubation for mechanical ventilation during the study period for reasons unrelated to trauma and/or surgical intervention, were included in the study. Stroke patients ventilated due to respiratory failure were excluded. The patients were ventilated in seven internal medicine wards (320 beds in all), in the general ICU (medical and surgical, 12 beds in all), in the medicine ICU (eight beds), and in the intensive coronary care unit (ICCU, seven beds) in the Soroka University Medical Center in Beer-Sheva, a 1,100 bed tertiary hospital in southern Israel. Patients who had a permanent tracheostomy were included in the study only if they breathed spontaneously during the month prior to hospitalization. Patients who underwent tracheal intubation and mechanical ventilation during the course of cardiopulmonary resuscitation were included in the study only if the ventilation continued for more than two hours after the conclusion of the resuscitation. The study staff was not involved in any way in the decision to ventilate the patients or in the decision as to the site (ICU or non-ICU) in which they were ventilated. The study was approved by the Committee for Research in Human Beings (the Helsinki Committee) of the Soroka University Medical Center that waived the need for informed patient consent for this study.
Study protocol
The study was a prospective, observational, non-interventional survey. Every morning throughout the study period a research staff member went through all the study wards and units and identified patients who began mechanical ventilation the previous day and met the inclusion criteria. For these patients a broad range of data was collected, as detailed below. The data sources were bedside records and patient charts, interviews with the patient's family and/or caregivers, and the computerized patient database system (medical and administrative) in the community and in the hospital. All the data were collected, entered into the study database, and analyzed by a computerized system.
Collected data
The following data were collected for each of the ventilated patients in the study population: demographic data, the setting from which the patient came to the hospital (community, nursing care), use of home oxygen, previous mechanical ventilation, chronic diseases and their severity as quantified by the Charlson score [24], pre-hospitalization functional status (two weeks before the present hospitalization) by the FIM scale [25], the medical indication for mechanical ventilation, the physiological condition of the patient on the first day of ventilation by APACHE II score [26], and in-hospital mortality.
Classification of ventilation
For the purposes of this study, ventilation was classified as ICU ventilation if at least one of the following three conditions was met: (a) ventilation in an ICU continued for at least 48 hours, (b) the entire period of ventilation took place in an ICU, even if it was less than 48 hours, and/or (c) the patient died while being ventilated in an ICU (unrelated to the amount of ventilation time there). Any ventilation that did not meet at least one of these three conditions was classified as non-ICU ventilation. Repeat ventilation during the course of the same hospitalization was considered as the same ventilation. Ventilation during another hospitalization for the same patient, during the course of the study, was considered separate ventilation.
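The three-condition rule above reduces to a simple predicate (a sketch; the variable names are ours, not the study's):

```python
def classify_ventilation(icu_hours, entirely_in_icu, died_ventilated_in_icu):
    """Study definition of 'ICU ventilation': at least 48 h ventilated in an
    ICU, OR the entire ventilation took place in an ICU (even if < 48 h),
    OR the patient died while being ventilated in an ICU (regardless of
    the time spent there).  Anything else is a 'non-ICU ventilation'."""
    if icu_hours >= 48 or entirely_in_icu or died_ventilated_in_icu:
        return "ICU"
    return "non-ICU"
```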
Non-ICU ventilation set-up
The set-up for ventilation in the internal medicine wards included three to four patients who were treated in the same room under the supervision of a nurse who was trained in the care of patients of this type. The indications for mechanical ventilation, the ventilation technique, and the ventilation machines were identical to those in the ICUs. The patients were under continual electrocardiographic (ECG) monitoring and vital signs were measured every few hours. Central venous lines were inserted when indicated, but arterial lines and Swan-Ganz catheters were not used. The doctors who treated these patients in the internal medicine wards treated 40 to 50 other patients in the ward as well. During regular daytime work hours these patients are attended by four to five doctors and during the night by one doctor. All internal medicine doctors undergo training in medical ICUs as part of their professional development and are skilled in the management of mechanically ventilated patients.
Statistical analyses
All collected data were entered into an EPI-DATA database. Comparison of the variables between ICU and non-ICU ventilations was conducted by the chi-square test or one-way analysis of variance (ANOVA) in accordance with the type of variable.
Multivariate logistic regression models were used to estimate the independent (adjusted) effects of patients' characteristics on the outcomes (hospitalization of a ventilated patient in an ICU and in-hospital mortality of ventilated patients). The models included variables that were found to have a significant association in the univariate analysis as well as those that had clinical significance (listed in the Results section). SPSS (Statistical Package for the Social Sciences, SPSS Inc, Chicago, IL, USA) statistical software (Version 14.0) was used for data processing and statistical analysis. Statistical significance was set at P < 0.05 throughout.
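For readers less familiar with logistic regression output, the adjusted odds ratios reported in the Results follow directly from the model coefficients. A small illustration (the 0.945 figure is the study's reported OR per year of age; the helper functions are generic arithmetic, not the study's code):

```python
import math

def odds_ratio(beta):
    """Adjusted odds ratio for a one-unit increase of a predictor in a
    logistic regression model: OR = exp(beta)."""
    return math.exp(beta)

def compound_or(or_per_unit, units):
    """Compound a per-unit odds ratio over several units of the predictor,
    e.g. an OR of 0.945 per year of age over a 10-year age difference."""
    return or_per_unit ** units
```

An OR of 0.945 per year thus corresponds to roughly a 43% reduction in the odds of ICU admission per decade of age, all else held equal.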
Results
In the course of two years between 1 July 2004 and 30 June 2006, there were 51,723 hospitalizations in the internal medicine wards of the Soroka University Medical Center and 909 ventilations for medical indications (stroke excluded) were recorded in patients aged 18 years or older. Of these ventilations, 330 (36.3%) were of patients 18 to 65 years of age. In accordance with the study definitions 277 (83.9%) of these were ICU ventilations.
The 579 other ventilations were done in 553 elderly patients 65 years or older (20 patients had two ventilations and two patients had three ventilations each in different hospitalizations during the study period, with an interval of at least six months between any two episodes). This group of ventilations comprised the study population. Of these ventilations, 283 (48.9%) were ICU ventilations compared with 296 (51.1%) non-ICU ventilations. Figure 1 presents all 909 ventilations divided between young (18 to 65 years) and elderly patients and into three sub-groups among the elderly patients. These four groups were compared in relation to the percentage of ICU ventilations. The graph demonstrates dramatically that the percentage of ICU ventilations dropped sharply with increasing age.
Of the 296 non-ICU ventilations there was a documented explanation in 172 cases (58.1%) for the decision by an ICU physician not to admit the patient to an ICU. In each of these cases the reason for the decision was either that the patient was not suited for an ICU or that no place was available in an ICU at the time. In the other 124 cases (41.9%) the ward physicians decided not to request transfer to an ICU. The reasons for this decision (obtained by direct questioning by the investigators) were that the case did not justify use of an expensive ICU bed and/or their impression, in light of familiarity with the decision process by ICU physicians, that there was no chance that the patient would be accepted to an ICU.

Table 1 presents a comparison of demographic characteristics and background medical information and the pre-hospitalization functional status for the two study groups. The distribution of the functional status is shown by grouping the FIM score into three functional conditions in addition to the total motor and cognitive FIM scores. In each of these presentation formats there is a conspicuous difference in the functional status between ICU and non-ICU ventilations, in which the latter had a significantly lower pre-hospitalization functional status. Table 1 also presents a comparison of the Acute Physiology and Chronic Health Evaluation II (APACHE II) scores in the first day of ventilation in the two study populations by total score and by three component subscores. The mean total score was higher among the non-ICU ventilations and the difference was very close to statistical significance.

Table 2 presents a comparison of the distribution of diagnoses that led to mechanical ventilation in the two study groups. Significant differences were found for four of seven variables: respiratory insufficiency secondary to sepsis, pulmonary edema, community-acquired pneumonia and cardiogenic shock.
Table 3 presents the results of the multivariate analysis with ICU ventilations as the dependent variable. The predictors in this analysis were age, sex, the Charlson score, hospitalization from a nursing home, use of home oxygen, previous mechanical ventilation, the patient's prehospitalization functional status by FIM scale, the Acute Physiological score from the APACHE II score, and the presence and absence of one of seven clinical diagnoses (detailed in Table 2) that were the reason for ventilation. Only the two predictors detailed in the table had a significant and independent effect on the decision to treat the patient in an ICU ward. Both older age and lower functional status had negative effects on the decision. All other predictors including the Acute Physiological score from the APACHE II did not have a significant and independent effect on the decision.
The number of ventilations that ended in in-hospital mortality among the ICU ventilations was 150 (53.0%) compared to 202 (68.2%) of the non-ICU ventilations (P < 0.001). Table 4 presents the results of the multivariate analysis for all ventilated elderly patients with in-hospital mortality as the dependent variable. The predictors in this analysis were those described for the previous multivariate analysis with the addition of ICU ventilation. Only the five predictors listed in the table had a significant and independent effect on in-hospital mortality in the total population. In this case the two most influential factors were conditions that led to ventilation: respiratory insufficiency secondary to sepsis as a positive predictor and pulmonary edema as a negative one. Other independent and significant predictors of in-hospital mortality were more chronic co-morbid conditions assessed by a higher Charlson score, greater physiological impairment assessed by the Acute Physiological score from the APACHE II, and older age. Conspicuously absent from this list were ICU ventilation and pre-hospitalization functional status, which were included in the analysis but were not found to have an independent and significant effect on in-hospital mortality.
Discussion
This paper focuses on the population of elderly patients who required mechanical ventilation, which in most cases was conducted in a non-ICU setting. This practice is very common in Israel, but less so in other countries in the western world. In this unique reality the question arises as to how generalizable the data and findings of this study are to a non-Israeli setting. In this respect it is noteworthy that there are many hospitals in the world in which, for various reasons, not all elderly patients are ventilated in ICUs. The findings of this study are very relevant for those settings. Furthermore, the combination of increased life expectancy, which causes ageing of the population, together with a deterioration in the economic state of the western world could lead, in just a few years, to a state in which the demand for mechanical ventilation of elderly patients with an advanced degree of disability exceeds the supply of expensive ICU beds, making the search for new solutions mandatory. The reality in which our study was conducted would, under those circumstances, be much more relevant and could serve as a model for testing ways of dealing with this problem in many countries in the western world. Indeed, our findings can contribute to the search for ways to reduce costs without making the outcome of ventilated elderly patients worse.
The percentage of ICU ventilations among younger patients (18 to 65 years) reached 84%, in contrast to a corresponding rate of only 49% in the elderly group. In addition, in the elderly age group there was a dramatic decrease in this percentage by age. Moreover, in the multivariate analysis of the various predictors of the decision to hospitalize the ventilated elderly in an ICU or not, age was found to have a significant and independent effect. The recommendation that 'chronological age per se is not a relevant criterion for hospitalization in an ICU' [27] was not substantiated in the present study population.
Several methodological decisions that were taken in the present study clearly affected its results and require discussion. The decision to study only medical ventilations stemmed from the understanding that this type of ventilation is relatively devoid of non-medical administrative issues that sometimes affect the decision to hospitalize post-operative patients in ICUs. Another critical decision that we took was how to define ICU ventilation for this study. We did not think that the option to define such patients as anyone who was ventilated only in an ICU would be appropriate in light of the high rate of patient transfers from ICUs to wards and vice versa while they are still being ventilated. Under these circumstances we decided to define ICU ventilation as one in which a patient was ventilated in an ICU for a significant and/or a critical portion of the ventilation period. In light of this definition we defined three parameters, any of which would qualify the ventilation as ICU ventilation for purposes of this study. Another problematic issue was how to relate to patients who were ventilated in wards but were not presented at any time to an ICU staff. In each of these 124 ventilations the background for not presenting the patient to an ICU consultant was the strong feeling of the treating physician that the patient was not suited for an ICU or that the request would be turned down by an ICU consultant. In light of this we decided not to separate these ventilations from those in which the patients were presented to an ICU and were rejected, and considered all of them as non-ICU ventilations. In this study we looked at the course of ventilation in elderly patients at two points in time only. One was at the beginning of ventilation, when we related to any data that could be collected up to that time. The second point of time was at the end of hospitalization, when we related to in-hospital mortality.
Relating exclusively to these two points of time was essential so that we could, on the one hand, manage the study objectives, while, on the other, not make the study too cumbersome.
In light of this strategy we purposely ignored the course of ventilation and its complications. For the same reason we also related to repeat ventilations in the same hospitalization as one prolonged ventilation.
The ICU gatekeeper who has to conduct triage and decide who should and who should not be admitted to an ICU is not equipped with well-defined guidelines for this task. The decision as to whether or not to admit a ventilated patient to an ICU should be reached on the basis of clinical and ethical considerations and in accordance with available space in an ICU at the given time. These considerations are very poorly defined for elderly patients and give the decision maker broad latitude. Thus, the appropriate method to identify the basis for the triage decision is to analyze its results. The univariate analyses of the various variables between the ICU and non-ICU ventilations identified significant differences between these two subgroups in terms of a broad range of characteristics. From among these predictors the multivariate analyses filtered out only two that had a significant and independent effect on the decision to hospitalize the ventilated patient in an ICU. These two influential factors were age, which was discussed above and was also found in a previous study [28], and the pre-hospitalization functional status of the patient. Despite the ethical problems relating to this issue, in practice the triage staff looked at higher age and poor functional status as negative factors in the decision to hospitalize the patient in an ICU. Among the variables that did not pass this filtering process the Acute Physiological score points component of the APACHE II score is noteworthy. This reflects a lack of significant consideration of the severity of the elderly patient's condition at the initiation of ventilation among the factors that influenced the decision to hospitalize in an ICU.
The primary importance of the list of variables that affect in-hospital mortality of elderly ventilated patients lies in the two variables that did not affect mortality. The first variable is the baseline functional status of the elderly patient. The explanation for the finding that this variable did not affect in-hospital mortality of elderly ventilated patients is that patients with a low functional status usually also have the characteristics that were found in this analysis to significantly and independently affect mortality, in particular very advanced age, a higher Charlson score, and a greater propensity for sepsis. When these factors are controlled, functional status does not have a significant independent effect on mortality. The second variable, ICU ventilation, did not have a significant independent effect on in-hospital mortality even though it was included in the analyses. One ramification of this finding is that the significantly lower rate of in-hospital mortality among ICU ventilations compared to non-ICU ventilations in this study stemmed from the different characteristics of the patients in these two sub-groups and not from hospitalization in an ICU per se. The other significance of this finding requires extra caution. The elderly ventilated population in this study underwent selection into two sub-groups on the basis of actual decisions as to where to hospitalize them. In this population, and in accordance with this selection process, in-hospital mortality was not affected by ICU ventilations as defined for the study. Despite this finding, it should not be inferred under any circumstances that hospitalization in an ICU does not contribute to the reduction of in-hospital mortality in other populations, using other triage methods and with other definitions of ICU ventilations.
Another important aspect of the list of variables that affect in-hospital mortality lies in its comparison to the list of factors that affect the decision to hospitalize elderly patients in an ICU. Although age is included in both lists, the other variables are included in only one of them. If survival at the end of the hospital period were the only or primary index for the success of ventilation in the study population, it would be reasonable to expect a greater similarity between the two lists. The striking difference between the two lists reflects, in our opinion, the view that in elderly ventilated populations, in-hospital mortality is not the only measure and apparently is not even the most important measure of success. Because we feel that this issue of the most appropriate measure of success in the population of ventilated elderly patients is of utmost importance, we analyzed the same cohort from the perspective of one year after discharge from the hospital. This analysis was published in a separate paper that was dedicated to this issue [29].
Conclusions
In Israel, most elderly patients are ventilated outside ICUs and the percentage of ICU ventilations decreases as age increases. In our study groups, the lower mortality among elderly patients ventilated in ICUs is related to patient characteristics and not to their treatment in ICUs per se. Although the milieu in which this study was conducted is uncommon today in the western world, its findings point to possible means of managing future situations in which the demand for mechanical ventilation of elderly patients exceeds the supply of intensive care beds. Moreover, the findings of this study can contribute to the search for ways to reduce costs without having a negative effect on the outcome in ventilated elderly patients.
Key messages
• In Israel, most elderly patients are ventilated outside ICUs.
• In Israel, the percentage of ICU ventilations decreases as age increases.
• The lower mortality among elderly patients ventilated in ICUs is related to patient characteristics and not to their treatment in ICUs per se.
• The findings of this study can contribute to the search for ways to reduce costs without having a negative effect on the outcome in ventilated elderly patients.

Mouse genetic background influences whether HrasG12V expression plus Cdkn2a knockdown causes angiosarcoma or undifferentiated pleomorphic sarcoma
Soft tissue sarcomas are rare mesenchymal tumours accounting for 1% of adult malignancies and are fatal in approximately one third of patients. Two of the most aggressive and lethal forms of soft tissue sarcomas are angiosarcomas and undifferentiated pleomorphic sarcomas (UPS). To examine sarcoma-relevant molecular pathways, we employed a lentiviral gene regulatory system to attempt to generate in vivo models that reflect common molecular alterations of human angiosarcoma and UPS. Mice were intravenously injected with MuLE lentiviruses expressing combinations of shRNA against Cdkn2a, Trp53, Tsc2 and Pten with or without expression of HrasG12V, PIK3CAH1047R or Myc. The systemic injection of an ecotropic lentivirus expressing oncogenic HrasG12V together with the knockdown of Cdkn2a or Trp53 was sufficient to initiate angiosarcoma and/or UPS development, providing a flexible system to generate autochthonous mouse models of these diseases. Unexpectedly, different mouse strains developed different types of sarcoma in response to identical genetic drivers, implicating genetic background as a contributor to the genesis and spectrum of sarcomas.
INTRODUCTION
Soft tissue sarcomas are rare mesenchymal malignancies that account for approximately 1% of all cancers. The WHO has defined over 100 different soft tissue sarcoma subtypes named after the tissue that they most closely resemble [1]. Based on molecular characteristics, soft tissue sarcomas can be divided into two broad categories: sarcomas with simple karyotypes, such as chromosomal translocations, and sarcomas with more complex genetic profiles, including TP53 mutation, CDKN2A deletion and MDM2 amplification [2][3][4].
Undifferentiated pleomorphic sarcomas (UPS), previously referred to as malignant fibrous histiocytomas (MFH), account for approximately 5% of adult soft tissue sarcomas and represent one of the most common types of high-grade soft tissue sarcoma. Standard treatment options are surgical resection, radiotherapy, and chemotherapy, which in many cases are not curative, highlighting the necessity to develop novel targeted treatments. It is not clear whether UPS represents a group of de-differentiated sarcomas that share a common morphology but which originated from different cell types or if all UPS tumours arise from an as-yet-unidentified common cell of origin [5]. The genetic alterations responsible for the development of UPS are also incompletely understood. TP53 alterations have been identified in 17% of human UPS [2] and CDKN2A loss seems to be an alternative to TP53 deletion [3]. HRAS and KRAS mutations have been identified in up to 50% of human UPS tumours [6][7][8]. Mouse studies have confirmed that the cooperation of oncogenic Kras and Trp53 or Cdkn2a deficiency resulted in the development of undifferentiated pleomorphic sarcomas in different tissues [9][10][11][12].
Another clinically aggressive subtype of high-grade soft tissue sarcoma is angiosarcoma. These tumours represent rare malignancies of endothelial differentiation that account for approximately 1% of all soft tissue sarcomas. Angiosarcomas show a wide anatomic distribution and arise spontaneously or secondarily to radiation, toxic chemicals (e.g. vinyl chloride) or chronic lymphoedema (Stewart-Treves syndrome). Treatment options are limited and the prognosis is poor [13]. Genetic mutations and amplifications of VEGF, MDM2, TP53, CDKN2A, KRAS and MYC have been described in angiosarcoma patients [14][15][16][17]. MYC gene amplifications are commonly found in radiation-induced angiosarcomas [18]. A recent publication reported that the majority of genetic alterations were found in the p53 and MAPK pathways. TP53 was mutated in 35% of the lesions and CDKN2A was lost in 26%. 53% of angiosarcomas displayed MAPK pathway activation and harboured activating mutations in KRAS, HRAS, NRAS, BRAF or MAPK1, or inactivating mutations in NF1 and PTPRB1 [19,20]. Several in vivo mouse studies showed the involvement of loss of function of the p53 tumour suppressor in angiosarcoma development [21][22][23]. In addition, the in vivo deletion of Cdkn2a in mice led to the development of lesions which recapitulate human angiosarcoma; however, only 30% of the mice displayed angiosarcomas within 100 days [24]. Furthermore, alterations in the PI3K/AKT/mTOR pathway have been identified in a small percentage of patients [19,25,26] and deletion of Tsc1, a tumour suppressor that negatively regulates the pathway, induced the formation of hemangiosarcomas in mice [27]. Another report showed that the in vivo deletion of Notch1 resulted in the development of hepatic angiosarcomas with a penetrance of 86% at 50 weeks after gene deletion [28], although genetic alterations in the Notch pathway have not been reported in human angiosarcomas.
Although these studies have been helpful in uncovering aspects of sarcomagenesis, there is limited understanding of the interactions between cooperating genetic alterations.
In this study we employed a mouse genetic approach using the MuLE lentiviral gene regulatory system [10] to functionally test the contributions of different candidate driver oncogenes and tumour suppressor genes to the formation of angiosarcoma and UPS. Different mouse strains were injected intravenously with ecotropic MuLE lentiviruses expressing combinations of shRNA against Cdkn2a, Trp53, Tsc2 and Pten with or without expression of Hras G12V, PIK3CA H1047R or Myc. Tumour development was monitored by in vivo imaging. We successfully generated new models of angiosarcoma and of UPS based on oncogenic Hras G12V expression in combination with knockdown of Cdkn2a or Trp53. Unexpectedly, different mouse strains developed different types of sarcoma in response to identical genetic drivers.
Expression of oncogenic Hras G12V plus knockdown of Cdkn2a causes angiosarcoma development in SCID/beige mice
To functionally test the contributions of different candidate driver oncogenes and tumour suppressor genes to the formation of angiosarcoma, we generated a panel of lentiviral vectors based on the MuLE system [10] (Supplementary Figure 1A), to induce genetic alterations that reflect some of the most commonly found alterations of human angiosarcomas. We first utilised these ecotropic MuLE lentiviruses expressing combinations of shRNA or shRNA-miR30 against Cdkn2a, Trp53, Tsc2 and Pten with or without expression of oncogenic Hras G12V, oncogenic PIK3CA H1047R or Myc vectors to attempt to generate panels of genetically-engineered angiosarcoma cell lines by infecting a disease-relevant cell type, namely primary murine endothelial cells from the spleen (pMSECs). Western blotting and real-time PCR assays of puromycin-selected cultured cells infected with these vectors verified that they effectively induced the desired changes in gene expression (Supplementary Figure 1B-1E). Consistent with an oncogenic activity of these genetic changes, all cell lines, with the exception of Hras G12V expression alone and Hras G12V expression plus shTrp53, exhibited increased rates of proliferation (Supplementary Figure 2A). The absence of increased proliferation induced by oncogenic Hras G12V or oncogenic Hras G12V plus Trp53 knockdown is likely to be mediated by the upregulation of p16INK4A protein expression observed in pMSEC cells infected with these vectors (Supplementary Figure 1B) as removal of this putative proliferative barrier by knockdown of Cdkn2a increased cellular proliferation (Supplementary Figure 2A).
To further investigate potential transformed cellular behaviour we cultured cells of all genotypes on low-attachment plates; however, none of the genetic combinations allowed the proliferation of cells as spheres or masses (data not shown), indicating that they do not have anchorage-independent proliferation capacity. We next asked whether these cellular systems might represent experimentally tractable allograft tumour models by injecting wild type pMSECs or shCdkn2a plus Hras G12V pMSECs subcutaneously into SCID/beige immunodeficient mice. Within one month of injection, both the wild type and shCdkn2a plus Hras G12V cells formed blood-filled lesions (Supplementary Figure 2B) that were lined with CD31-positive endothelial cells with atypical nuclei growing either as single layers or in papillary projections (Supplementary Figure 2C). The injected cells apparently have the capacity to co-opt or integrate into local blood vessels, resulting in large blood-filled vascular structures. Based on the fact that this phenotype also arose following injection of wild type pMSECs in the absence of oncogene activation or tumour suppressor inactivation, we conclude that this ex vivo engineered cellular system does not represent a good allograft model system for angiosarcoma.
We next sought to assess the tumour forming capacity of the same genetic changes that were tested in the experiments above directly in vivo. 4-6-week-old SCID/beige mice were intravenously injected via the tail vein with concentrated ecotropic MuLE vectors that carried an expression element for firefly luciferase ( Figure 1A) in order to label infected cells and to trace potential tumour development in vivo over time. The intravenous injection of an ecotropic lentivirus expressing oncogenic Hras G12V together with knockdown of Cdkn2a (n = 21 of 32 injected mice) or Trp53 (n = 2 of 4 injected mice) in SCID/beige mice induced increases in luciferase signals over 4-8 weeks ( Figure 1B and Supplementary Figure 3A). These signals were widely distributed in different organs throughout the body. One of three mice injected with a vector expressing only oncogenic Hras G12V developed signals in the brain approximately 6 months after injection. None of the other viruses was sufficient to cause any large increases in luciferase signal within 6 months of injection, demonstrating that these combinations of genetic alterations are not oncogenic in this setting.
Dissections of mice revealed that the increased luciferase signals corresponded to the presence of bloody-appearing tumours in different organs (Figure 1C and 1D and Supplementary Figure 3). From 32 mice injected with shCdkn2a plus Hras G12V MuLE vectors, 21 mice developed a total of 24 tumours in various tissues including testicle (n = 9, 38%), brain (n = 7, 30%), spleen (n = 2, 8%), uterus (n = 2, 8%), ovary (n = 1, 4%), lung (n = 1, 4%), colon (n = 1, 4%) and eye (n = 1, 4%). Histological analysis of these tumours revealed poorly demarcated malignant neoplasms with hemorrhage and irregular, anastomosing vascular channels. Endothelial lining showed multilayering and intraluminal tufting with nuclear atypia, hyperchromasia, enlargement and irregularity (Supplementary Figure 3A, arrowheads). Mitotic activity was variable and back-to-back vascular channels appeared sieve-like. Atypical cells were either spindled, epithelioid or mixed. There was more solid growth in more poorly differentiated areas. Intraluminal erythrocytes were a common feature as well as large areas filled with red blood cells. Tumour cells exhibited high levels of expression of H-RAS in comparison to surrounding normal tissue, indicating that the MuLE virus is functional in vivo (Supplementary Figure 3B). Positive immunoreactivity to antibodies against the endothelial cell marker proteins CD31 and von Willebrand Factor (vWF) confirmed the endothelial differentiation of the tumour cells (Figure 2A). The lesions showed variable staining for VIMENTIN, DESMIN and SMOOTH MUSCLE ACTIN (SMA), ranging from an absence of staining to some tumours showing strong positivity (Figure 2A). None of the tumours exhibited nuclear staining for the skeletal muscle markers MYOD1 and MYOGENIN (Figure 2A). Supplementary Figure 4 shows positive and negative control stainings for the different antibodies employed throughout this study.
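The site distribution and penetrance quoted above follow directly from the raw counts; as a minimal arithmetic check, the following pure-Python sketch recomputes them (counts transcribed from the text; variable names are illustrative, and percentages are rounded to whole numbers, so they may differ from the paper's rounding by one point):

```python
# Tumour counts per site in SCID/beige mice injected with shCdkn2a plus
# HrasG12V MuLE vectors (counts taken from the text above).
site_counts = {
    "testicle": 9, "brain": 7, "spleen": 2, "uterus": 2,
    "ovary": 1, "lung": 1, "colon": 1, "eye": 1,
}

total_tumours = sum(site_counts.values())  # 24 tumours in 21 of 32 injected mice
penetrance = 21 / 32                       # fraction of injected mice that developed tumours

# Share of all tumours found at each site, as whole percentages
site_pct = {site: round(100 * n / total_tumours) for site, n in site_counts.items()}

print(total_tumours)             # 24
print(round(100 * penetrance))   # 66
print(site_pct["testicle"])      # 38
```

Recomputing the shares this way also makes explicit that the percentages are fractions of tumours (n = 24), not of injected mice (n = 32).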
Tumours that arose in Trp53 knockdown plus oncogenic Hras G12V injected mice displayed an identical histological appearance and immunohistochemical staining profile to tumours in Cdkn2a knockdown plus oncogenic Hras G12V injected mice (Figure 2B). In summary, these histological and molecular features are consistent with a diagnosis of angiosarcoma. Indeed, analyses of three human angiosarcomas revealed that they have a similar histological appearance and exhibit a similar pattern of immunoreactivity to the mouse tumours (Figure 2C). We conclude that we have developed a rapid autochthonous mouse model of angiosarcoma that is trackable via live animal imaging and that reflects the frequent genetic alterations that arise in human angiosarcoma tumours.
Different immunocompetent mouse strains display different types of soft tissue sarcomas in response to expression of oncogenic Hras G12V plus loss of Cdkn2a function
To more accurately model the complexities of tumour development in humans it would be desirable to have tumour models that arise in immunocompetent mice. We therefore investigated whether similar tumours arose in response to intravenous injection of shCdkn2a plus Hras G12V MuLE vectors in the Fox Chase CB17, 129/Sv and C57BL/6 mouse strains.
Fox Chase CB17 mice carry the immunoglobulin heavy chain allele from C57BL/Ka mice on a BALB/c background. They serve as an ideal control for SCID/beige mice as they represent the identical genetic background but have a normal immune system. Within 4 weeks, 7 of 8 (88%) Fox Chase CB17 mice injected intravenously with shCdkn2a plus Hras G12V MuLE vectors developed angiosarcomas with comparable growth kinetics and histology to those that arose in SCID/beige mice (Supplementary Figure 5A). The anatomic distribution of the tumours in Fox Chase CB17 mice was similar to the tumour distribution seen in SCID/beige mice and included brain (n = 4, 50%), testicle (n = 3, 38%) and spleen (n = 1, 12%) (Supplementary Figure 5B). The tumours presented as bloody lesions and stained positively for CD31 and vWF by immunohistochemistry. Like the tumours in SCID/beige mice, these tumours also showed variable staining for VIMENTIN, DESMIN and SMA as well as an absence of staining for MYOD1 and MYOGENIN (Supplementary Figure 5C). These experiments demonstrate that a competent immune system does not affect tumour formation and provide a new autochthonous angiosarcoma model in an immunocompetent background.
To investigate sarcoma formation in mouse backgrounds that are more commonly utilised for biomedical research we next employed 129/Sv mice. Intravenous injections of shCdkn2a plus Hras G12V MuLE viruses in 129/Sv mice (n = 8) caused a strong luciferase signal increase and the development of multiple tumours with 100% penetrance within 4 weeks (Figure 3A and 3B, Supplementary Figure 6A). Bloody-appearing tumours were observed in testicles of 75% of male mice (n = 3) and in uteri (n = 2) and ovaries (n = 2) of 50% of female mice. 25% of mice carried lesions in the spleen (n = 2), and 13% in lung (n = 1) and brain (n = 1), similar to results in SCID/beige mice. However, all female mice (n = 4) additionally developed subcutaneous tumours (n = 5) that were located in the head and neck region and close to the subcutis of the vulva. These tumours were solid and white in appearance (Figure 3C, last column). Neither the subcutaneous location nor the gross morphological appearance was ever seen in tumours in SCID/beige or Fox Chase CB17 mice. While all of the bloody-appearing tumours exhibited an identical histological appearance and pattern of immunoreactivity similar to tumours that arose in SCID/beige mice (Figure 3C, Supplementary Figure 6B), classifying them as angiosarcomas, the subcutaneous tumours exhibited a completely different histology and immunohistochemical staining profile. These tumours contained cells with rhabdoid features; i.e., large polygonal cells with gigantic bizarre nuclei (Figure 4A, arrowheads), abundant, deeply eosinophilic cytoplasm in a tadpole or racquet shape and growing in a storiform pattern. Nuclei displayed high mitotic indices, irregular nuclear membranes, and eosinophilic cytoplasmic inclusions. These tumours showed necrotic regions, acute inflammatory responses and were highly invasive, infiltrating surrounding tissues including muscle and fat (Figure 4A, arrows).
The absence of apparent features of any definable cell lineage is suggestive of a diagnosis of undifferentiated pleomorphic sarcoma (UPS). Indeed, while these tumours showed high levels of H-RAS expression in comparison to adjacent normal tissue (Figure 4C, 4D) and were immunoreactive for the common mesenchymal marker VIMENTIN, the tumour cells did not stain for lineage markers including CD31, vWF, DESMIN, SMA, MYOD1 or MYOGENIN (Figure 3C, last column). Given the subcutaneous location of these tumours, we further investigated whether the tumours might potentially represent sarcomatoid variants of malignancies derived from a skin cell. Tumours in 129/Sv mice stained negatively for the melanoma marker protein PMEL (the HMB45 antigen) (Figure 4E). Melanomas, as well as other types of neural lineage-derived tumours, are typically diffusely positive for S100 and PMEL. The majority of tumour cells in tumours in 129/Sv mice were negative for S100 but some scattered cells within the tumour displayed positive immunoreactivity for S100 (Figure 4F). Since macrophages also stain positively for S100, it is likely that this staining is due to the presence of inflammatory cells. Tumour cells in human sarcomatoid squamous cell carcinomas typically exhibit nuclear immunoreactivity for p63 [29]. While normal skin showed strong nuclear p63 immunoreactivity as an internal positive control, only a small number of scattered cells in tumours in 129/Sv mice displayed weak cytoplasmic staining for p63 (Figure 4G), arguing against a diagnosis of sarcomatoid squamous cell carcinoma. Tumours were also completely negative for the epithelial marker EpCAM (Figure 4H), but scattered cells within the 129/Sv subcutaneous tumours (Figure 4I), as well as rare cells in angiosarcomas in SCID/beige mice (Figure 4J), reacted with a pan-CYTOKERATIN antibody, another epithelial marker.
However, since human sarcomas, including UPS and angiosarcoma, can contain cells that are immunoreactive for pan-CYTOKERATIN, caution should be taken in interpreting this staining as providing strong evidence of epithelial origin [30]. Positive control stainings of all of these antibodies are shown in Supplementary Figure 4.
Given the absence of morphologic features of any cellular lineage, the absence of clear evidence for positivity of tumour cells for a panel of lineage markers and the absence of strong and diffuse pan-CYTOKERATIN staining, we therefore favour the diagnosis of these tumours as high-grade UPS Not Otherwise Specified, in keeping with WHO diagnostic guidelines. However, since the differential diagnosis of UPS versus sarcomatoid carcinoma is necessarily a diagnosis of exclusion, it remains formally possible that these tumours might represent an unknown type of undifferentiated sarcomatoid carcinoma arising from a cell type that we have not been able to identify. As is well known from human tumour specimens, the differential diagnosis of these lesions remains a matter of ongoing debate. Importantly, these analyses show that 129/Sv mice develop two different types of tumours, angiosarcomas and UPS, in response to the same oncogenic stimulus.
To further investigate the effect of genetic background on tumour formation, we injected C57BL/6 mice with shCdkn2a plus Hras G12V MuLE lentiviruses. Within 4-8 weeks of injection both male and female C57BL/6 mice showed large increases in luciferase signal and developed subcutaneous lesions with 92% penetrance (12 of 13 injected mice) (Figure 5A and 5B). Tumours (n = 12) that developed in C57BL/6 mice were solid and white in appearance like those seen in female 129/Sv mice. They were subcutaneous and located either in the head and neck area, lower leg or at the junction of the tail and spine. These tumours exhibited an identical histological appearance to the UPS tumours that arose in 129/Sv mice (Figure 4B). These tumours similarly showed an absence of staining for all of the markers that were used to characterise the tumours in 129/Sv mice, except VIMENTIN (Figure 5C). Based on these results we conclude that shCdkn2a plus Hras G12V MuLE viruses solely cause high-grade UPS in C57BL/6 mice.
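The strain comparison described above can be summarised as a small penetrance table; the following Python sketch tabulates affected/injected counts and tumour types per strain (all numbers transcribed from the text; the dictionary layout is illustrative, not part of the study's analysis):

```python
# Sarcoma penetrance after intravenous shCdkn2a plus HrasG12V MuLE
# injection, by mouse strain (affected mice / injected mice; counts
# and tumour-type diagnoses taken from the text above).
cohorts = {
    "SCID/beige":     {"affected": 21, "injected": 32, "types": {"angiosarcoma"}},
    "Fox Chase CB17": {"affected": 7,  "injected": 8,  "types": {"angiosarcoma"}},
    "129/Sv":         {"affected": 8,  "injected": 8,  "types": {"angiosarcoma", "UPS"}},
    "C57BL/6":        {"affected": 12, "injected": 13, "types": {"UPS"}},
}

# Print penetrance (whole-percent) and observed tumour types per strain
for strain, c in cohorts.items():
    pct = round(100 * c["affected"] / c["injected"])
    print(f"{strain}: {pct}% penetrance; {', '.join(sorted(c['types']))}")
```

Laying the cohorts out this way makes the central observation explicit: identical genetic drivers, near-complete penetrance in the immunocompetent strains, but a strain-dependent shift in tumour type.
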
DISCUSSION
Major hurdles in studying sarcoma pathologies are the relative rarities of the human diseases and the absence for many sarcoma subtypes of good pre-clinical models. Here, we used the MuLE lentiviral gene regulatory system [10] to investigate the molecular genetics underlying the pathogenesis of two types of soft tissue sarcomas, namely angiosarcoma and UPS. The MuLE system allows the direct introduction of multiple genetic alterations in somatic cells in vivo by lentiviral injection. Bypassing germline transgenic approaches has benefits in terms of time and costs and offers flexibility in terms of the strains of mice that can be used for the experiments, allowing comparisons between different genetic backgrounds. Guided by the genetics of human angiosarcoma and UPS tumours, we functionally tested the contributions of the candidate sarcoma tumour suppressors Cdkn2a, Trp53, Tsc2 and Pten and the candidate oncogenes Hras G12V, PIK3CA H1047R and Myc. We discovered that the systemic injection of ecotropic lentiviruses expressing oncogenic Hras G12V together with the knockdown of Cdkn2a or Trp53 was sufficient to initiate angiosarcoma formation in multiple organs in SCID/beige mice. shCdkn2a plus Hras G12V MuLE viruses also induced angiosarcoma formation in Fox Chase CB17 and 129/Sv mice, but surprisingly additionally caused UPS development in 129/Sv mice and only UPS development in C57BL/6 mice. These observations are consistent with the fact that RAS-MAPK and p53 pathway alterations are frequently found in high-grade soft tissue sarcomas, such as angiosarcomas and UPS [2,3,6,7,19].
Our experimental approach provides new models of at least some of the genetic subsets of human angiosarcomas and UPS tumours. It should also be noted that while histological and immunohistochemical analyses revealed numerous similarities between mouse and human tumours, there are differences in terms of the anatomical sites where the mouse model tumours arise and the most frequent sites where naturally occurring human counterpart tumours arise. Mouse angiosarcomas in our model arose most frequently in the brain, genital organs and spleen. Human angiosarcomas are most frequently found in the skin (60%), deep soft tissue (25%), breast (8%) and more rarely in liver and spleen [1]. There are also examples of cases of human angiosarcomas arising in the testes [31], uterus and ovaries [32], brain [33], lungs [34] and colon [35], which are all sites at which tumours were observed in our mouse models. Since endothelial cells are present in all tissues in the body, it appears that angiosarcomas have the potential to arise in any organ in humans. In our mouse model, the anatomical distribution of tumours is likely to be influenced by the method of intravascular injection, which will presumably influence the likelihood of viral infection at different sites in the animal. In contrast, UPS tumours in our models were always localised subcutaneously. In humans UPS can arise anywhere in the body but most frequently occurs in lower extremities and sometimes in the retroperitoneum, head and neck and breast. These tumours typically arise in deep subfascial tissue, but about 10% of UPS are primarily subcutaneous [1], similarly to the tumours that arise in our mouse models.
Genetic cooperation between RAS pathway activation and loss of Cdkn2a tumour suppressor function has been shown in other mouse models of sarcomas. Intramuscular injection of Adeno-Cre in the leg of loxP-STOP-loxP-Kras G12D/+ ;Cdkn2a fl/fl mice caused UPS development [9,11] and we have previously shown that intramuscular injection of the same shCdkn2a plus Hras G12V or shTrp53 plus Hras G12V MuLE lentiviruses that were used in this study caused the development of high-grade UPS [10]. It is noteworthy that injection of these viruses into skeletal muscle in SCID/beige mice caused UPS [10] but intravenous injection caused angiosarcoma, whereas in C57BL/6 mice both modes of injection caused UPS formation [10]. Our data demonstrate that the same combination of genetic drivers can cause different types of tumours based not only on the site of viral delivery, likely due to infection of different cell types, but also based on the genetic background of the mouse strain.
Why does intravenous, lentiviral-mediated delivery of the same genetic alterations cause different tumours in different strains of mice? One possibility relates to genetic modifiers such as allelic variants, sequence differences, epigenetic modifications and gene expression levels that could potentially influence tumour phenotypes [36]. There is precedent for the fact that identical genetic alterations can result in different phenotypes in different mouse strains. One example is that the incidence of mammary tumours varies among strains heterozygous for Trp53, with C57BL/6 mice being resistant and BALB/c mice being susceptible [37,38]. Indeed, BALB/c mice carry an allelic variant of Cdkn2a that leads to compromised p16 activity, likely causing an increased susceptibility to develop certain tumours [39,40]. In the context of the results of the present study, it might be possible in future long-term and large-scale studies to utilise multi-generational interbreeding strategies between C57BL/6 and Fox Chase CB17 strains, coupled with genomic analyses, to narrow down loci that contribute to the type of sarcoma that develops in response to oncogenic Hras G12V expression and Cdkn2a knockdown. By extension, our observations highlight the fact that, in humans too, the genetic background of every individual may potentially influence the outcome of oncogenic mutations in terms of what type of sarcoma develops. In this context, it is noteworthy that a study of UPS tumours in Korean patients identified a high frequency of oncogenic KRAS and HRAS mutations [6], but a study by the same authors of UPS tumours in American patients revealed that these tumours lacked KRAS and HRAS mutations [7], arguing that different human genetic backgrounds (or environmental factors) may select for different oncogenic mutations during the course of sarcoma development.
To further investigate this issue, we analysed the mutation spectrum of a series of 48 human UPS tumours from the TCGA provisional dataset using cBioPortal software (http://www.cbioportal.org/study?id=sarc_tcga#summary) (Supplementary Figure 7). This patient cohort comprises 42 white, 4 black or African American, 2 hispanic or latino and 2 patients of unknown ethnicity, with no reported Asian patients. Interestingly, this set of UPS tumours does not display activating point mutations in any of the RAS-family genes, in keeping with the previous study of American patients [7]. However, the majority of these tumours exhibit loss-of-function point mutations or copy number deletions of the TP53 and CDKN2A tumour suppressor genes and all but one of these tumours exhibit multiple copy number gains or amplifications of genes involved in the RAS-RAF-MEK-MAPK signalling cascade, as well as copy number losses of the NF1 tumour suppressor gene that negatively regulates signalling by this cascade. These results give rise to a hypothesis that could be tested in future studies, namely that chromosomal copy number alterations rather than point mutations account for activation of the RAS-signalling pathway in combination with loss of the CDKN2A-TP53 tumour suppressor pathways to drive UPS tumour formation in non-Asian individuals. In summary, we believe that our functional tumour modelling studies, combined with the above-described genetic analyses, further emphasise the need to consider individual tumours at the molecular level rather than at the level of histo-pathological appearance when thinking about the development and clinical application of new molecularly targeted sarcoma therapies.
A second possible explanation for the different tumour types that arise in the different mouse strains relates to cellular tropism. It remains possible that MuLE viruses might infect different spectra of cells in different mouse strains, potentially due to different expression levels of the mouse cationic amino acid transporter 1 (mCAT1) protein that serves as the virus receptor. However, to our knowledge, nothing is known about the expression patterns of this protein in different strains of mice and in our experience identifying infected cells in vivo is only possible using genetic reporter mouse lines [10]. Moreover, the identification of infection of a particular cell type is at best a hint that it might be able to be transformed and act as the cell of origin of a tumour. Consistent with the idea that angiosarcoma and UPS tumours might arise due to the infection of different cell types, it is noteworthy that all of the UPS tumours that arose in 129/Sv and C57BL/6 mice were subcutaneous, whereas angiosarcomas in 129/Sv, Fox Chase CB17 and SCID/beige mice were found in several different organs. These differences in anatomical distribution are suggestive of different cells of origin of these tumours. Proof of the cell of origin of a tumour requires a lineage tracing experiment. Given the fact that angiosarcomas express markers of endothelial cells and are intimately connected to the normal vascular network of the mouse (blood-filled lesions), it appears likely that the cells of origin of these tumours are vascular endothelial cells. Future experiments could involve lineage tracing using a Cre-driver such as Tie2-Cre ERT2 [41] to genetically label endothelial cells prior to injection of shCdkn2a plus Hras G12V MuLE lentiviruses. A potential caveat of these studies may however be that the tumour type that arises will also be dictated by the (likely mixed) genetic background of the strain of the lineage tracing mouse.
The cell of origin of UPS remains unknown, and it is not clear whether UPS represents a group of de-differentiated sarcomas that share a common morphology but originate from different cell types, or whether all UPS tumours arise from a common cell of origin [5]. One experimentally testable hypothesis is that adult mesenchymal stem cells (MSCs), which have been proposed as the cell of origin of human UPS [42][43][44][45], also give rise to UPS in our models. It has been shown that the overexpression of oncogenic Kras in cultured MSCs isolated from the bone marrow of Trp53 knockout C57BL/6 mice caused transformation of these cells in vitro [46]. We have shown that ecotropic MuLE lentiviruses can infect a variety of cultured primary cells, including mouse embryonic stem cells [10], and although bone marrow represents the main source of MSCs, MSCs also exist in low numbers in peripheral blood [47]. It is therefore theoretically possible that infected MSCs in the peripheral blood could give rise to UPS in 129/Sv and C57BL/6 mice.
In summary, these new experimental models will facilitate future pre-clinical studies for establishing new therapeutic interventions for these aggressive malignancies. While it is somewhat surprising that the other tested candidate tumour suppressors and oncogenes were not sufficient to cause tumour formation, given their frequent mutational alteration in human angiosarcomas and UPS, the flexible nature of the MuLE system should also allow the testing of other candidate oncogenes, modifier genes or tumour suppressor genes that will likely continue to emerge from ongoing genomic studies of these rare tumours.
Generation of MuLE vectors
The majority of the MuLE Entry and Destination vectors used in this study were previously described [10] and were recombined using MultiSite Gateway LR 2-fragment recombinations to generate the final viral constructs. New MuLE Entry vectors carrying 7SK promoter-driven expression of shRNA against Tsc2 (Sigma, TRCN0000306244) and CMV promoter-driven expression of hemagglutinin (HA)-tagged phosphoinositide 3-kinase H1047R (HA-PIK3CA H1047R, Addgene 12524, deposited by Dr. Jean Zhao) were generated. Ecotropic lentiviral vectors were produced using calcium phosphate-mediated transfection of HEK293T cells, and the viral preparation was concentrated as described previously [10].
Culture, infection and assays of pMSECs
C57BL/6 mouse primary spleen endothelial cells (pMSECs, C57-6057, Cell Biologics) were cultured according to the supplier's instructions in complete mouse endothelial cell medium (M1168, Cell Biologics) in a humidified 5% (v/v) CO2 incubator at 37 °C. pMSECs at 50% confluence were transduced with lentiviral vectors in the presence of 4 μg/ml polybrene (hexadimethrine bromide, 107689, Sigma-Aldrich). Puromycin (ANT-PR-1, InvivoGen) was added to cultures 48 h after infection and drug selection was continued until all control cells were dead. pMSECs were seeded in triplicate at a density of 2,000 cells per well in 96-well plates and analysed after 1, 3, 5 and 7 days using the SRB assay [48]. For allograft assays, 1 × 10^6 pMSECs were injected subcutaneously into SCID/beige mice.
In vivo tumour formation assays
Concentrated ecotropic lentiviruses were injected (3 × 10^5 infectious viral particles in a volume of 10 ml/kg) using a 30G insulin syringe into the lateral tail vein of 4-6-week-old mice. Non-invasive in vivo bioluminescence imaging was performed using the IVIS Spectrum (Perkin Elmer) together with the Living Image software (version 4.4). Mice were anaesthetised using 2.5% isoflurane. During imaging, the isoflurane level was reduced to 1.5%. All fluorescence measurements were performed in epi-fluorescence mode. For bioluminescence imaging, mice were injected subcutaneously with 150 mg/kg D-luciferin (Caliper, no. 122796) and imaged 15 min after injection.
Immunohistochemistry
Tumour-bearing organs were resected, fixed in 10% formalin, paraffin-embedded and cut into 5-µm-thick sections. Immunohistochemical analysis was performed as described [49]. The antibodies used in this study were anti-
Author contributions
LPB, JA, TH, SP, AFG and AC performed experiments, LPB, JA and IJF analysed data, PJW performed histopathological analyses and LPB and IJF wrote the manuscript with input from all authors. | 2018-05-09T00:43:45.688Z | 2018-04-13T00:00:00.000 | {
"year": 2018,
"sha1": "ba3b31c2c2f784d28e4fbfb6b5600500e706a45f",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/24831/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba3b31c2c2f784d28e4fbfb6b5600500e706a45f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
53022211 | pes2o/s2orc | v3-fos-license | The immunologic constant of rejection classification refines the prognostic value of conventional prognostic signatures in breast cancer
Background The immunologic constant of rejection (ICR) is a broad phenomenon of Th-1 immunity-mediated, tissue-specific destruction. Methods We tested the prognostic value of a 20-gene ICR expression signature in 8766 early breast cancers. Results Thirty-three percent of tumours were ICR1, 29% ICR2, 23% ICR3, and 15% ICR4. In univariate analysis, ICR4 was associated with a 36% reduction in risk of metastatic relapse when compared with ICR1-3 (p = 2.30E–03). In multivariate analysis including notably the three major prognostic signatures (Recurrence score, 70-gene signature, ROR-P), ICR was the strongest predictive variable (p = 9.80E–04). ICR showed no prognostic value in the HR+/HER2− subtype, but prognostic value in the HER2+ and TN subtypes. Furthermore, in each molecular subtype and among the tumours defined as high risk by the three prognostic signatures, ICR4 patients had a 41–75% reduction in risk of relapse as compared with ICR1-3 patients. ICR added significant prognostic information to that provided by the clinico-genomic models in the overall population and in each molecular subtype. ICR4 was independently associated with achievement of pathological complete response to neoadjuvant chemotherapy (p = 2.97E–04). Conclusion ICR signature adds prognostic information to that of current proliferation-based signatures, with which it could be integrated to improve patients’ stratification and guide adjuvant treatment.
BACKGROUND
Despite recent progress, ~15% of patients with breast cancer still develop metastases and die. During the last decades, genomic analysis revealed the extent of the molecular heterogeneity of the disease. 1 Based on gene expression profiling, a new molecular classification was defined, confirming that breast cancer is a group of molecularly distinct subtypes associated with different clinical outcomes and prognostic features. In parallel, multigene signatures prognostic and/or predictive of response to chemotherapy were developed. 2,3 Several commercially available prognostic classifiers have been cleared by the Food and Drug Administration for clinical use or endorsed by the American Society of Clinical Oncology (ASCO), National Comprehensive Cancer Network (NCCN), and Saint-Gallen guidelines to assist clinicians in making decisions about adjuvant chemotherapy, in particular for patients with HR+/HER2− tumours. Indeed, those signatures, mainly based on genes involved in cell proliferation, provide modest prognostic information for patients with classically proliferative HER2+ or triple-negative (TN) tumours.
The role of immunity in counteracting tumour progression is clearly recognised. 4,5 Classically, breast cancer is considered less immunogenic than melanoma or renal cell carcinoma.
Nonetheless, the role of immunity has emerged with the demonstration of a favourable predictive impact of the presence of tumour-infiltrating lymphocytes (TILs) 6 and of gene expression signatures of immune response (IR), notably for TN and HER2+ tumours. 7,8 Given the recent therapeutic success of immune checkpoint inhibitors in several types of cancer, 9,10 these drugs were tested in breast cancer: 11 no or very low activity was observed in HR+ tumours, whereas higher activity was reported in small subsets of heavily pre-treated TN tumours preselected for increased PD-L1 expression, with objective response rates of 18.5% with pembrolizumab (n = 27) 12 and 24% with atezolizumab (n = 21), 13 and remarkably durable responses.
Recent data suggest that not only the composition of tumour-infiltrating immune cells, but also their functional orientation might serve as a prognostic/predictive marker to select systemic therapies. 5 The functional orientation towards cytotoxic response is observed in tumours undergoing regression following immunotherapy and, in melanoma, has been associated with responsiveness to interleukin-2, adoptive therapy, vaccines, and checkpoint inhibitors. [14][15][16][17][18][19] Although prognostic immune signatures defined in breast cancer differ in terms of gene composition, most of them include transcripts underlying a cytotoxic response. [20][21][22] The corresponding pathways are also activated during other forms of immunity-mediated tissue-specific destruction, such as allograft rejection, 23 graft-versus-host disease, 24 and flares of autoimmunity. 25 We defined them as the immunologic constant of rejection (ICR). 5,18 More specifically, the ICR consists of a signature including genes involved in Th-1 signalling (IFNG, TBX21, CD8A/B, IL12B, STAT1, and IRF1), Th-1 chemoattraction (such as the CXCR3 ligands CXCL9 and CXCL10, and the CCR5 ligand CCL5), and cytotoxic functions (GNLY, PRF1, GZMA, GZMB, and GZMH). Interestingly, the expression of these pro-cytotoxic transcripts in tumours is associated with the counter-activation of suppressive mechanisms, such as the expression of IDO1, CTLA4, CD274 (PD-L1), PDCD1 (PD-1), and FOXP3. 26 In a study 27 centred on the TCGA data set, we found that breast cancers can be classified in four classes according to the ICR signature. In such classification, the level of immune antitumour response progressively decreased from ICR4 to ICR1. The ICR4 tumours, characterised by the coordinate activation of the ICR pathways, displayed a prolonged survival as compared with ICR1-3 tumours in univariate analysis.
Here, to further assess its clinico-biological value, we expanded the ICR classification to a set of 8766 non-metastatic, invasive primary breast cancers. We searched for correlations with clinico-biological data, including metastasis-free survival (MFS) and pathological complete response (pCR) to neoadjuvant chemotherapy.
MATERIALS AND METHODS
Breast cancer samples and gene expression profiling
Our institutional series included 352 tumour samples from pre-treatment invasive primary mammary carcinomas, either surgically removed or biopsied. 28 The study was approved by our institutional review board. Each patient had given written informed consent for research use. Samples had been profiled using Affymetrix U133 Plus 2.0 human microarrays (Santa Clara, CA, USA). We pooled them with 34 public breast cancer data sets comprising both gene expression profiles generated using DNA microarrays and RNA-Seq and clinicopathological annotations. These sets were collected from the National Center for Biotechnology Information (NCBI)/Genbank GEO and ArrayExpress databases, and authors' websites (Supplementary Table 1). The final pooled data set included 8766 non-redundant, non-metastatic, non-inflammatory, primary, invasive breast cancers.
Gene expression data analysis
Before analysis, several steps of data processing were applied. The first step was the normalisation of each set separately. It was done in R using Bioconductor and associated packages; we used quantile normalisation for the available processed data from non-Affymetrix-based sets (Agilent, SweGene, and Illumina), and Robust Multichip Average (RMA) with the non-parametric quantile algorithm for the raw data from the Affymetrix-based sets. In the second step, we mapped the hybridisation probes across the different technological platforms represented, as previously reported. 29 When multiple probes mapped to the same GeneID, we retained the most variant probe in a particular data set. We log2-transformed the available TCGA RNA-Seq data, which were already normalised.
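The quantile-normalisation step can be illustrated in a few lines of plain Python (a simplified sketch of the idea only, not the Bioconductor implementation; ties are broken by sort order here rather than averaged):

```python
def quantile_normalize(matrix):
    """Quantile-normalise the columns (samples) of a genes x samples matrix.

    Each sample's values are replaced by the mean of the values sharing
    the same rank across all samples, so every sample ends up with an
    identical distribution.
    """
    n_genes = len(matrix)
    n_samples = len(matrix[0])
    # Values of each sample, sorted ascending.
    sorted_cols = [sorted(matrix[g][s] for g in range(n_genes))
                   for s in range(n_samples)]
    # Mean across samples at each rank: the shared reference distribution.
    rank_means = [sum(col[r] for col in sorted_cols) / n_samples
                  for r in range(n_genes)]
    # Map each original value back to the mean at its rank.
    result = [[0.0] * n_samples for _ in range(n_genes)]
    for s in range(n_samples):
        column = [matrix[g][s] for g in range(n_genes)]
        ranks = sorted(range(n_genes), key=lambda g: column[g])
        for rank, g in enumerate(ranks):
            result[g][s] = rank_means[rank]
    return result

# Toy example: three genes (rows) measured in two samples (columns).
normalized = quantile_normalize([[5, 4], [2, 1], [3, 8]])
```

After normalisation both columns contain the same set of values (here 1.5, 3.5 and 6.5), which is the property that makes arrays comparable within a data set.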
In order to avoid biases related to trans-institutional immunohistochemistry analyses, and thanks to the bimodal distribution of the respective mRNA expression levels, the estrogen receptor (ER), progesterone receptor (PR), and HER2 statuses (negative/positive) were defined on transcriptional data for ESR1, PGR, and HER2, respectively, as previously described. 30 The molecular subtypes of tumours were defined as HR+/HER2− for ER+ and/or PR+ and HER2− tumours, HER2+ for HER2+ tumours, and TN for ER−, PR−, and HER2− tumours.
We applied several multigene signatures in each data set separately. First, we applied the ICR classifier, based on consensus clustering (CC) analysis of the expression levels of 20 representative immune genes (namely, CCL5, CD274, CD8A, CD8B, CTLA4, CXCL9, CXCL10, FOXP3, GNLY, GZMA, GZMB, GZMH, IDO1, IFNG, IL12B, IRF1, PDCD1, PRF1, STAT1, and TBX21), as previously described. 27 Briefly, the CC analysis was performed in R using the Bioconductor package "ConsensusClusterPlus", 31 setting as input parameters 5000 repetitions, 80% item resampling (pItem), a number of groups (k) fixed to 4 (in order to have all data sets stratified with the same number of classes, 4 being the optimal number of groups for the TCGA cohort 27 ), and agglomerative hierarchical clustering with the Ward criterion (Ward.D2) as inner linkage and complete outer linkage. We also applied the three major prognostic multigene classifiers of breast cancer: Recurrence score, 32 70-gene signature, 33 and Risk of Relapse score based on PAM50 subtype and proliferation (ROR-P). 2 Other signatures included the metagenes associated with immune cell populations such as T cells, CD8+ T cells, and B cells defined by Palmer et al., 34 the transcriptional signatures of 24 different innate and adaptive immune cell subpopulations defined by Bindea et al., 35 the cytolytic activity score, 36 the activation scores of the IFNα, IFNγ, and tumour necrosis factor (TNFα) immune-related and TP53 biological pathways, 37 and a chromosomal instability signature. 38 We also applied to each data set separately three immune gene signatures reported as prognostic in specific molecular subtypes of breast cancer: the IR signature 22 and the lymphocyte-specific kinase (LCK) signature 20 in ER− breast cancers, the Immune 28-kinase signature 21 in basal/TN breast cancers, and the LCK signature 20 in HER2+ breast cancers.
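The resampling logic behind consensus clustering can be sketched as follows (a pure-Python illustration only: the clustering function here is a trivial sign-based placeholder, whereas the study uses Ward hierarchical clustering via ConsensusClusterPlus):

```python
import random

def consensus_matrix(values, cluster_fn, reps=200, p_item=0.8, seed=0):
    """Consensus clustering bookkeeping: repeatedly cluster resampled
    subsets of the items and record how often each pair co-clusters,
    normalised by how often the pair was sampled together."""
    rng = random.Random(seed)
    n = len(values)
    k = max(1, round(p_item * n))           # items drawn per repetition
    together = [[0] * n for _ in range(n)]  # times i, j shared a cluster
    sampled = [[0] * n for _ in range(n)]   # times i, j were drawn together
    for _ in range(reps):
        idx = rng.sample(range(n), k)
        labels = cluster_fn([values[i] for i in idx])
        for a in range(k):
            for b in range(a + 1, k):
                i, j = idx[a], idx[b]
                sampled[i][j] += 1
                sampled[j][i] += 1
                if labels[a] == labels[b]:
                    together[i][j] += 1
                    together[j][i] += 1
    return [[together[i][j] / sampled[i][j] if sampled[i][j] else 0.0
             for j in range(n)] for i in range(n)]

# Placeholder clusterer (illustration only): two clusters split by sign.
scores = [-2.0, -2.1, -1.9, 2.0, 2.1, 1.9]
cm = consensus_matrix(scores, lambda xs: [0 if x < 0 else 1 for x in xs])
```

With two well-separated groups, pairs within a group reach a consensus of 1.0 and pairs across groups stay at 0.0; the number of groups k is then chosen (here fixed to 4 in the study) from the stability of this matrix.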
Finally, we calculated the mitogen-activated protein kinase (MAPK)-mut score using MAPK genes upregulated in MAP2K4/MAP3K1-mutated vs. wild-type tumours, as listed elsewhere. 27

Statistical analysis
Correlations between tumour classes and clinicopathological variables were analysed using one-way analysis of variance (ANOVA) or Fisher's exact test when appropriate. MFS was calculated from the date of diagnosis until the date of distant relapse. Follow-up was measured from the date of diagnosis to the date of last news for event-free patients. Survivals were calculated using the Kaplan-Meier method and curves were compared with the log-rank test. Uni- and multivariate prognostic analyses for MFS were done using Cox regression analysis (Wald test). The variables submitted to univariate analyses included patients' age at diagnosis (≤50 years vs. >50), pathological type (lobular vs. ductal vs. other), pathological axillary lymph node status (pN: negative vs. positive), pathological tumour size (pT1 vs. pT2 vs. pT3), pathological grade (1 vs. 2 vs. 3), molecular subtype (HR+/HER2− vs. HER2+ vs. TN), and classifications based on ICR and prognostic multigene signatures. Likelihood ratio (LR) tests were used to assess the prognostic information provided beyond that of a clinical model and other signatures, assuming a χ2 distribution. Changes in the LR values (LR-ΔX2) quantified the relative amount of prognostic information of one model compared with another. We also analysed the pCR after neoadjuvant chemotherapy, defined as the absence of invasive cancer in both breast and axillary lymph nodes. Uni- and multivariate analyses for pCR were done using logistic regression. Variables with a p-value < 0.05 in univariate analyses were tested in multivariate analyses. All statistical tests were two-sided at the 5% level of significance.
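For nested models differing by a single added term, the LR p-value reduces to a chi-square tail probability with one degree of freedom, which can be computed with the standard library alone. The sketch below is an illustration (it assumes df = 1 for the added classification) and reproduces the p-value reported in the Results for LR-ΔX2 = 10.39 in the overall population:

```python
import math

def chi2_sf_df1(x):
    """Survival function of the chi-square distribution with one degree
    of freedom: P(X > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

# Sanity check: the classical df = 1 critical value 3.841 maps to p = 0.05.
p_crit = chi2_sf_df1(3.841459)

# The LR-delta-X^2 of 10.39 quoted for the overall population corresponds
# to a p-value of about 1.3E-03, consistent with the reported 1.27E-03.
p_icr = chi2_sf_df1(10.39)
```

The same one-liner applies to any of the quoted LR-ΔX2 values, since each compares a model with and without one additional classification term.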
Statistical analysis was done using the survival package (version 2.30) in the R software (version 2.9.1; http://www.cran.r-project.org/). We followed the reporting REcommendations for tumour MARKer prognostic studies (REMARK criteria). 39 The immunologic constant of rejection classification refines the. . . F Bertucci et al.
RESULTS

Breast cancer population and ICR classification
We applied the ICR classification to a series of 8766 pre-treatment cancer samples. Most of the patients were >50 years old and most of the tumours were ductal type, pT1-pT2, pN−, grade 2-3, ER+, HER2− (Supplementary Table 2). Sixty-six percent were HR+/HER2−, 12% were HER2+, and 22% were TN. The ICR classification defined 2874 tumours (33%) as ICR1, 2516 (29%) as ICR2, 2061 (23%) as ICR3, and 1315 (15%) as ICR4, with a progressive decrease in the enrichment of the ICR signature from ICR4 to ICR1. The box plot of expression of each ICR gene according to the ICR classes is shown in Supplementary Figure 1.
ICR classification and clinicopathological and biological features
We found correlations between the ICR classes and all tested clinicopathological features (Supplementary Table 3). The ICR4 class was associated with age ≤ 50 years, ductal type, fewer pT1 tumours, fewer pN0 cases, high grade, ER− status, PR− status, and the TN subtype. Interestingly, for all those correlations, a continuum existed from ICR1 to ICR4. The TN subtype was more enriched in ICR4 (28%) than the HER2+ subtype (19%), which was in turn more enriched than the HR+/HER2− subtype (10%; p < 1.00E-06).
Correlations also existed with immunity-related factors and prognostic signatures of breast cancer (Supplementary Table 4). We found a positive correlation with the lymphocyte infiltrate scored binary (low vs. high), the percentage of high-score samples increasing with the ICR class (p = 2.09E-04). We found strong positive correlations (p < 1.00E-06) with immune gene expression signatures defined in breast cancer: the metagene scores of T cells, CD8+ T cells, and B cells 34 increased from ICR1 to ICR4, as did the activation scores of the IFNα, IFNγ, and TNFα pathways 37 (Fig. 1), and the cytolytic activity score. 36 This immune pattern was confirmed and refined using the 24 Bindea signatures for immune cell subsets, 35 showing a strong enrichment from ICR1 to ICR4 for T cells, cytotoxic T cells, CD8+ T cells, T-helper cells, Tγδ cells, activated NK CD56dim cells and neutrophils (p < 1.00E-100; Supplementary Figure 2). Among T-helper cells, the Th-1/Th-2 ratio increased from ICR1 to ICR4, whereas Th-17 enrichment, often associated with unfavourable prognosis, 35,40 decreased. This antitumour activation was also correlated to subsets involved in antigen presentation, such as activated dendritic cells (aDCs), DCs, B cells, and macrophages. Mast cells and eosinophils decreased from ICR1 to ICR4. Finally, the percentage of high-risk samples increased from ICR1 to ICR4 (p < 1.00E-06) for the 70-gene signature, 33 the Recurrence score, 32 and the ROR-P score 2 (Fig. 1).
ICR classification and MFS in each molecular subtype
In order to further assess the complementarity of the ICR classification with other signatures, we repeated the same analysis in each molecular subtype separately (Supplementary Table 6, Figs. 2c-e). In the HER2+ subtype (n = 352), the ICR classification and the LCK signature were associated with MFS in univariate analysis, with a HR for MFS event equal to 0.31 (95% CI, 0.15-0.68; p = 3.14E-03, Wald test) in ICR4 when compared with ICR1-3. In multivariate analysis, only the ICR classification remained significant. In the TN subtype (n = 563), the ICR classification displayed strong prognostic value with a HR for MFS event equal to 0.44 (95% CI, 0.28-0.69; p = 3.42E-04, Wald test) in ICR4 when compared with ICR1-3. The other immune signatures (IR, LCK, and 28-kinase) were also significant in univariate analysis, but in multivariate analysis, only the ICR signature kept its prognostic value (p = 1.57E-02, Wald test). Finally, in the HR+/HER2− subtype (n = 2131), the ICR classification was not associated with MFS, whereas most of the clinicopathological variables and all classical prognostic signatures were.
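The MFS curves underlying these comparisons are Kaplan-Meier estimates. As a minimal illustration of the estimator (using toy data, not the study's), each relapse time multiplies the running survival probability by one minus the fraction of at-risk patients who relapsed:

```python
def kaplan_meier(data):
    """Kaplan-Meier survival estimate.

    `data` is a list of (time, event) pairs, with event = 1 for a
    metastatic relapse and 0 for censoring. Returns (time, S(t)) steps:
    at each relapse time, S multiplies by (1 - relapses / at_risk).
    """
    data = sorted(data)
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        relapses = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                relapses += 1
            else:
                censored += 1
            i += 1
        if relapses:
            s *= 1.0 - relapses / n_at_risk
            curve.append((t, s))
        n_at_risk -= relapses + censored
    return curve

# Toy data: relapses at t = 1, 3, 4; censoring at t = 2 and 5.
curve = kaplan_meier([(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)])
```

Here the estimate steps to 0.8, then 8/15, then 4/15; censored patients leave the risk set without forcing a step down, which is why censoring must be handled explicitly rather than treated as an event.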
Comparison of ICR classification with other prognostic signatures
Such prognostic complementarity between the proliferation-based signatures and our ICR signature was tested using LR tests (Table 2). In the overall population and in each molecular subtype, significant additional prognostic information was provided by our signature beyond that provided by the clinical model combined with each other signature (70-gene, Recurrence score, and ROR-P score). For example, the ICR signature added information to that provided by the combination of the clinical model and Recurrence score in the overall population (LR-ΔX2 = 10.39, p = 1.27E-03), in the TN (LR-ΔX2 = 15.68, p = 7.52E-05), HER2+ (LR-ΔX2 = 12.72, p = 3.63E-04), and HR+/HER2− subtypes (LR-ΔX2 = 4.46, p = 3.46E-02). Based on the LR-ΔX2 values, the added prognostic information was larger in the TN and HER2+ subtypes and the overall population than in the HR+/HER2− subtype where, however, it remained significant.
ICR classification and pathological response to chemotherapy
A total of 1229 breast cancer samples were informative regarding the pathological response to anthracycline-based neoadjuvant chemotherapy. Among them, 283 (23%) displayed pCR, whereas 946 did not. In univariate analysis (Table 3), the ICR classification was associated with pCR (43% pCR in the ICR4 class vs. 20% in the ICR1-3 class, p = 2.88E-10), with an odds ratio (OR) for pCR equal to 2.99 (95% CI 2.24-3.97). The other significant variables were high grade, and HER2+ and TN subtypes. In multivariate analysis, all variables remained significant, including the ICR classification (p = 2.97E-04, logit function). Here too, a continuum existed in terms of pCR rate between the four ICR classes, from 14% (ICR1) to 20% (ICR2), 28% (ICR3), and 43% (ICR4). Such correlation between ICR classes and pCR rate was maintained in each molecular subtype separately (Supplementary Table 7).
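As a quick sanity check (an illustration using the rounded pCR rates above, not the exact patient counts, which explains the small difference from the exact OR of 2.99), the reported odds ratio can be recovered from the two rates:

```python
def odds_ratio(p1, p0):
    """Odds ratio comparing an event probability p1 against p0."""
    return (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))

# pCR rates: 43% in ICR4 vs. 20% in ICR1-3 (rounded percentages).
or_icr4 = odds_ratio(0.43, 0.20)  # ~3.0, close to the reported 2.99
```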
Based on these results and the MFS results, we postulated that the prognostic value of ICR classification could be mediated, at least in part, by its association with response to chemotherapy. Thus, we analysed its prognostic value in our MFS data set according to the delivery or not of adjuvant chemotherapy, which was informed for 2355 patients, including 1653 HR+/HER2−, 265 HER2+, and 437 TN. As shown in Supplementary Figure 5, in the whole population the prognostic value was present in the chemotherapy-treated group (p = 1.40E-02, log-rank test), but not in the chemotherapy-naive group (p = 0.18); however, interaction was not significant (p = 0.14). Analysis per molecular subtype revealed no prognostic value for ICR classification in both groups in the HR+/HER2− patients and no interaction; by contrast, interaction was significant (p = 4.71E-02) in the TN patients, with strong prognostic value in the chemotherapy-treated group (p = 1.80E-03, log-rank test) and no prognostic value in the chemotherapy-naive group (p = 0.47); in HER2+ patients, there was no significant interaction, with strong prognostic value in the chemotherapy-naive group (p = 2.84E-02, log-rank test) and no prognostic value in the chemotherapy-treated group despite strong difference in MFS between the two ICR classes (p = 0.21). Thus, these data confirm that, in breast cancer, the ICR4 class is associated with higher response to chemotherapy, particularly in the TN subtype.
DISCUSSION
Here, we show that the transcriptional ICR signature, reflecting an immune antitumour response, defines a continuum of clinically and biologically relevant classes of breast cancers. The signature is associated with classical prognostic features and immunity-related parameters, with MFS, where it refines the prognostic value of classical prognostic signatures, and with pathological response to chemotherapy. Our approach tested the prognostic and predictive value of our signature in an independent series of samples, thus avoiding the problem of overfitting. We analysed a retrospective pooled set of 8766 pre-therapeutic samples of non-metastatic and invasive primary breast cancers, including 3046 cases informative for MFS and 1229 for pathological response to chemotherapy. Such figures allowed testing our hypothesis in uni- and multivariate analyses in the whole population, but also in each molecular subtype separately. Moreover, the whole-genome transcriptional data allowed testing several other gene signatures and modules relevant to breast cancer.
An immunological continuum was observed with increasing enrichment, from ICR1 to ICR4, of scores reflecting the presence of an antitumour IR, such as lymphocyte infiltrate, expression signatures of immune cell types including T cells, cytotoxic T cells, Th-1 cells, CD8+ T cells, T-helper cells, Tγδ cells, and antigen-presenting cells, and scores of IFNγ pathway activation and of cytolytic activity. Although the molecular subtype is classically associated with immunologic infiltrate, such correlations persisted in multivariate analysis including the molecular subtypes. The level of immune activation captured by the ICR classification positively correlated with classical negative prognostic features of breast cancer, such as the scores of standard prognostic signatures (70-gene signature, Recurrence score, and ROR-P score). Here too, a continuum was observed from ICR1 to ICR4, the latter being associated with the poorer-prognosis features. The activation score of the TP53 pathway 37 decreased from ICR1 to ICR4, in agreement with the higher rate of inactivating TP53 mutations reported in ICR4, 27 whereas chromosomal instability 38 increased.
Importantly, although associated with poor-prognosis features (including the TN subtype and high-risk defined by classical prognostic signatures), the ICR4 class displayed longer MFS than the three other classes, which showed similar MFS and were pooled. In the whole population, the 5-year MFS was 84% in ICR4 and 78% in pooled ICR1-3, with a HR for relapse equal to 0.64. Multivariate analysis showed that such prognostic value was independent from that of classical prognostic variables and of the three major prognostic signatures of breast cancer, clearly suggesting that IR (reflected by our classification) and tumour cell proliferation (reflected by the three other signatures) provide complementary prognostic information. Of note, the lymphocyte infiltration, relatively simple measure of IR, which was available only for the 999 TCGA samples, including 929 with available follow-up (88 HER2+, 180 TN, and 661 HR+/HER2−), was not associated with MFS in univariate analysis, whereas our ICR classification was (data not shown). In fact, the prognostic value of subtypes, no prognostic signature is marketed to date. However, we included in our prognostic analysis three immune signatures centred on the antitumour response and previously reported as prognostic (IR, LCK, 28-kinase): we confirmed their prognostic value in univariate analysis, which was, however, lost in multivariate analysis when confronted to our ICR classification. Interestingly, in these subtypes also, ICR stratified into prognostic Finally, the ICR classification was also independently associated with pathological response to anthracycline-based chemotherapy, with 43% pCR rate in ICR4 vs. 20% in ICR1-3, and an OR close to 3. Here too, there was a continuum between ICR1 and ICR4 in term of pCR rate, further linking the degree of antitumour response to the degree of chemosensitivity of breast cancer. 41,42 Such correlation was observed in each molecular subtype. 
Unfortunately, no expression data are currently available in the literature for testing the potential value of our signature as a predictor of response to checkpoint inhibitors.
In conclusion, our 20-gene ICR signature displays robust predictive value for MFS and for pathological response to anthracycline-based chemotherapy in breast cancer. Among aggressive tumours, those with a coordinated antitumour response (ICR4) display better prognosis and respond better to chemotherapy than those without, further reinforcing the fact that the immune reaction is an important component of breast cancer and complementary to cell proliferation in prognostic terms. Our study displays several strengths: (i) the large size of the series, which represents to our knowledge one of the largest series reported so far analysing the prognostic/predictive value of gene signatures in breast cancer; (ii) the analysis per molecular subtype, demonstrating that the prognostic value is absent in the whole population of HR+/HER2− tumours, but major in the TN tumours; (iii) the persistence of prognostic and predictive values in multivariate analysis including classical prognostic signatures; (iv) the analysis per relapse risk in each molecular subtype, demonstrating that the prognostic value is present in high-risk tumours only; (v) the added prognostic value beyond that provided by the clinical model and each major prognostic signature; (vi) the biological relevance of the signature, which reveals a gradient of antitumour IR in breast cancer and suggests the potential therapeutic interest of stimulating a pro-Th-1 response; (vii) the small number of genes in the signature, which should facilitate its clinical application using other tests applicable to formaldehyde-fixed paraffin-embedded samples, such as quantitative reverse transcriptase-PCR. The main limitation is the retrospective nature of our series and associated biases.
The perspectives are therapeutic. Indirectly, the integration of the ICR classification with classical prognostic signatures can improve the prognostication of breast cancer. For example, identification of poor- or good-prognosis cases within operated TN breast cancers should help tailor the systemic treatment: although the 5-year MFS of the ICR4 class remains insufficient (83%) and cannot preclude the use of adjuvant chemotherapy, the strong MFS difference suggests that ICR1-3 patients may need a more aggressive treatment than ICR4 patients. The same is true in HR+/HER2− patients defined as high risk according to the classical prognostic signatures. Such a hypothesis should be tested prospectively to identify additional women who might be spared unnecessary chemotherapy or who, perhaps, could be treated with adjuvant immune-modulatory approaches. More directly, since the antitumour IR seems to play a pivotal role in clinical outcome, the manipulation of genes and/or pathways 11,43 interfering with its development should provide new therapeutic weapons for treating these poor-prognosis tumours. | 2018-11-10T06:20:14.069Z | 2018-10-24T00:00:00.000 | {
"year": 2018,
"sha1": "55304920c0684e5aee77079b86be0ab818a0cd1d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41416-018-0309-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcfd2349382873c3bed355b45614d39158aab676",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216369706 | pes2o/s2orc | v3-fos-license | A Momentum-Based Method for the Mass Concentration Measurement of Pneumatically Conveyed Solid: Computational Verification
A momentum-based method is proposed to measure the mass concentration of solids in pneumatic pipelines. The mechanism relies on the fact that particles flowing with the fluid will exert drag force, and thus induce a pressure increase in the fluid phase, when they decelerate on approaching a static object or space in the flow domain. The pressure increase is expected to grow with the mass concentration of particles and the fluid velocity, and is also affected by the particle size and density. A conversion factor is defined that indicates the extent to which particle momentum is converted into a static pressure increase of the fluid. Computational verification of the mechanism was carried out with pulverized coal flow around a one-end closed tube with its opening facing the incoming flow. The pressure increase was quantitatively comparable to the fluid dynamic pressure when the solid-to-fluid mass ratio was of the same order of magnitude. The conversion factor was found to be insensitive to particle size over a wide range, which is advantageous since the particle size distribution is usually not known in advance. Compared with most existing techniques, the current mechanism is more robust and economical for field measurements in industry.
Introduction
Pneumatic transportation of bulk solids is widespread in industry. The concentration of solid particles is a crucial parameter for the better performance and operation of the relevant industrial processes. Taking a coal-fired power plant as an example, pulverized coal particles are conveyed by so-called primary air to the boiler furnace through tens of pipes. Balancing the coal flow between these pipes, though difficult to realize, is necessary to achieve higher combustion efficiency and lower production of NO and NO2. Better control and adjustment of the coal flow can be carried out only when the mass flow rate or solid concentration of each pipe is measured and monitored. Various measurement mechanisms and techniques have been proposed and developed in past decades, such as laser-based, gamma-ray-based and electrostatic-based methods and electrical capacitance tomography (ECT) [1][2][3][4], but none of them is mature enough to be widely applied. The main drawback of light-based techniques is that the solid concentration is too high for light to transmit, and it is very hard to keep the observation windows and lenses clear. ECT, on the other hand, is sensitive to the moisture content of the air-coal mixture, which may vary over wide ranges in the majority of power plants (e.g. in China), as the type and quality of the supplied coals are rarely stable or predictable. Besides, since the measurement must be implemented on each pipe of a single boiler (twenty to forty pipes in total), the investment at the current prices of the aforementioned techniques is too high and the benefit can hardly be justified. Therefore, it is essential to develop cost-effective, robust and reasonably accurate solid concentration measurement techniques. In this paper, a mechanism that depends only on the basic physical properties (e.g. velocity, density, particle size) and flow characteristics of the gas-solid mixture is proposed and computationally verified.
Mechanism of measurement and computational setup
Pitot tubes are widely used to measure the velocity of pure fluid flow. When solid particles are present in the flow, however, the metering holes of the pitot tube may easily become blocked. According to our experience, on the other hand, before the pitot tube fails to work, the measured pressure difference can be substantially higher than that produced by pure fluid flow, due to the presence of the solid particles. This phenomenon can be utilized to measure the particle concentration.
Measurement mechanism
Assume the main flow is uniform and the opening of a one-end closed tube (i.e. the metering tube, as schematically shown in Figure 1) faces the incoming flow. For single-phase flow, the fluid inside the tube is stagnant, with a static pressure equal to the total pressure of the main flow:

$$p = p_0 + \frac{1}{2}\rho_f u_0^2 \qquad (1)$$

where $p_0$, $\rho_f$ and $u_0$ are the static pressure, density and velocity of the main fluid, respectively. $u_0$ can be measured by techniques that are not affected by the presence of solid particles. For gas-solid flow with a particle mass concentration (solid-to-fluid mass ratio) $c$, some particles will 'collide' into the tube due to inertia. These particles decelerate under the drag force exerted by the stagnant fluid, which in turn increases the fluid's static pressure. If the tube is long enough, particles halt before reaching the end of the tube, which maximizes the 'conversion' of particle momentum into the fluid pressure increment; otherwise only a portion is converted. Thus we have:

$$\Delta p = \frac{1}{2}\,k\,c\,\rho_f u_0^2 \qquad (2)$$

with $k$ defined as the 'conversion' factor, which is a function of the gas and solid densities as well as the particle diameter $d_p$. For simplicity, the following assumptions are made before further derivation: (1) particles inside the tube do not collide with the side wall of the tube, nor with each other; (2) the fluid inside the tube is stagnant; (3) particles start to decelerate at the opening of the tube; (4) particles are single-sized and spherical, and Stokes' law of drag applies.
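As a numerical illustration of the measurement relation, the sketch below evaluates the pure-fluid stagnation pressure, the solid-induced pressure rise of the form ½·k·c·ρf·u0², and the inversion of the latter to recover the mass concentration c, which is the measurement itself. All numerical values (air density, velocity, conversion factor, mass ratio) are illustrative assumptions, not taken from the paper.

```python
# Sketch of the measurement principle: the solid-induced pressure rise
# dp = 0.5 * k * c * rho_f * u0^2 sits on top of the pure-fluid total pressure,
# and inverting it recovers the solid-to-fluid mass ratio c.
# All numbers below are illustrative assumptions, not values from the paper.

def total_pressure(p0, rho_f, u0):
    """Stagnation pressure of the pure fluid: p0 + 0.5*rho_f*u0^2."""
    return p0 + 0.5 * rho_f * u0**2

def solid_pressure_rise(c, rho_f, u0, k):
    """Extra static-pressure rise caused by decelerating particles."""
    return 0.5 * k * c * rho_f * u0**2

def mass_ratio_from_dp(delta_p, rho_f, u0, k):
    """Invert the relation above: the measurement itself."""
    return 2.0 * delta_p / (k * rho_f * u0**2)

rho_f, u0, k = 1.2, 25.0, 1.4   # air density [kg/m^3], velocity [m/s], conversion factor
c_true = 0.5                     # assumed solid-to-fluid mass ratio
dp = solid_pressure_rise(c_true, rho_f, u0, k)
c_est = mass_ratio_from_dp(dp, rho_f, u0, k)
print(dp, c_est)
```

With these assumed values the solid-induced rise (262.5 Pa) is of the same order as the fluid dynamic pressure (375 Pa), consistent with the abstract's observation for mass ratios of order one.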
The number of particles that are in flight (decelerating) at any moment is given by the product of the flight duration $T_r$, the cross-section of the tube $A$, the fluid velocity $u_0$ and the particle number concentration $n$:

$$N = n A u_0 T_r \qquad (3)$$

Assuming a particle starts decelerating at $t=0$ with $u(0)=u_0$, its velocity at moment $t$ is then given by [5]:

$$u(t) = u_0\, e^{-t/\tau}, \qquad \tau = \frac{\rho_p d_p^2}{18\mu} \qquad (4)$$

where $\tau$ is the Stokes relaxation time, $\rho_p$ the particle density and $\mu$ the fluid viscosity. The total impulse exerted by a single particle on the fluid through drag force over the time $0 \sim T_r$ is thus:

$$I = \int_0^{T_r} F_d\,dt = m_p\left[u_0 - u(T_r)\right] = m_p u_0\left(1 - e^{-T_r/\tau}\right) \qquad (5)$$

where $m_p$ is the particle mass. With particles entering the tube at the rate $nAu_0$, the corresponding pressure increase contributed by all particles is:

$$\Delta p = n\,m_p u_0^2\left(1 - e^{-T_r/\tau}\right) = c\,\rho_f u_0^2\left(1 - e^{-T_r/\tau}\right) \qquad (6)$$

where $c = n m_p/\rho_f$ is the solid-to-fluid mass ratio. In the extreme condition where the tube is long enough for particles to halt before impacting the end of the tube ($T_r \gg \tau$), equation (6) simplifies into:

$$\Delta p = c\,\rho_f u_0^2 \qquad (7)$$

On the other hand, if these particles are simply gas molecules of the fluid phase, they merely add to the dynamic pressure:

$$\Delta p = \frac{1}{2}\,c\,\rho_f u_0^2 \qquad (8)$$

Equations (7) and (8) define two extreme conditions, in which particles either have high inertia while the tube is substantially long, or have near-zero inertia and behave like gas molecules. For real conditions, we can define a general form:

$$\Delta p = \frac{1}{2}\,k\,c\,\rho_f u_0^2 \qquad (9)$$

where $k$ is termed the 'conversion' factor.
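The deceleration law and the resulting pressure rise can be sketched numerically. Under the stated assumptions, the exponential velocity decay with Stokes relaxation time τ = ρp·dp²/(18μ) implies a conversion factor of the form k = 2(1 − e^(−Tr/τ)) for inertial particles; this closed form is our reading of the derivation (the gas-molecule limit k = 1 arises separately, from molecules joining the stagnant fluid), and the material properties below are assumed values for pulverized coal in air.

```python
import math

# Reconstructed result of the derivation: the pressure rise scales as
# 1 - exp(-Tr/tau) with the Stokes relaxation time tau = rho_p*d_p^2/(18*mu),
# so the conversion factor for inertial particles is k = 2*(1 - exp(-Tr/tau)).
# Material properties below are assumed (coal in air), not from the paper.

def relaxation_time(rho_p, d_p, mu):
    """Stokes relaxation time tau = rho_p * d_p^2 / (18 * mu)."""
    return rho_p * d_p**2 / (18.0 * mu)

def conversion_factor(t_r, tau):
    """k = 2*(1 - exp(-Tr/tau)); approaches the upper limit 2 when Tr >> tau."""
    return 2.0 * (1.0 - math.exp(-t_r / tau))

tau = relaxation_time(rho_p=1400.0, d_p=90e-6, mu=1.8e-5)  # ~0.035 s
print(conversion_factor(10 * tau, tau))   # long residence: close to the limit 2
print(conversion_factor(0.1 * tau, tau))  # short residence: far below 2
```

The sketch reproduces the two regimes discussed in the text: full momentum conversion for long residence times and only partial conversion when particles cannot halt inside the tube.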
Computational setup
The CFD simulations were carried out with Fluent 14.0 in a two-dimensional configuration with an axisymmetric boundary condition on the axis of the tube, and the SST k-ω model was adopted. As the diameter of the conveying pipe (usually above 0.5 m) is substantially larger than that of the tube, and to reduce the computational cost, the width of the flow domain was set to twenty times the tube radius and a shear-free wall boundary condition was applied to the side edge of the domain. The tangential and normal coefficients of restitution for the impact of particles with the side wall of the tube were set to 0.5 for the standard cases and varied in other cases to investigate the effect of wall collisions on k. A trap condition was applied to the end wall, since in real applications particles halt and accumulate there, making rebound unlikely. Figure 2 shows the fluid trace lines around the tube opening, without and with particles present in the flow.
Fluid flow and particle motion
In the latter case, the particles have a diameter of 90 μm. Fluid swirl motion and reverse flow are seen near the opening of the tube in both cases. Beyond a certain distance into the tube (of a scale similar to the tube radius), the fluid is nearly stagnant. The presence of particles does not significantly affect the fluid motion. Particles with dp = 1.0 μm have low inertia and cannot 'penetrate' deep into the tube; they mostly move a short distance into the tube and then move out following the fluid's reverse flow. Particles of moderate size fly straight into the tube and slow down under the fluid drag force, stopping before colliding with the tube end. Particles that are sufficiently big cannot decelerate substantially and finally impact the tube end. A small portion of particles may collide with the side wall of the tube. Figure 3 shows the variation of k with dp for cases with e = 0.5. As theoretically predicted in the previous sections, k is close to 1.0 for small dp. k first increases with increasing dp, reaches a maximum of approximately 1.5, and then decreases with dp, falling below 1.0 for dp larger than about 145 μm. The maximum can be approached only when particles are big enough and the tube is substantially long. For a tube of moderate length, k decreases once dp exceeds a certain range, because the particles cannot stop before the tube end and only a portion of their momentum is converted into the pressure increase of the gas. The curve is rather flat for dp ranging from approximately 20 μm to 105 μm. This is advantageous for industrial applications, since the particle size distribution (PSD) is usually not known in advance. As a matter of fact, the PSDs of pulverized coal powders vary frequently with the coal type, the operating condition of the mills, etc. Therefore, a more precise and robust measurement can be achieved if most particle sizes fall within the insensitive region of the k~dp curve.
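A rough way to see why large particles reach the tube end is to compare the Stokes stopping distance s = u0·τ with a plausible tube length. The sketch below uses assumed values for coal density, air viscosity and flow velocity; it is an order-of-magnitude check under the paper's Stokes-drag assumption, not the paper's own computation.

```python
import math

# Stokes stopping distance s = u0 * tau: the distance a particle needs to halt
# inside the stagnant tube fluid. Particles with s much smaller than the tube
# length stop before the end (full momentum conversion); particles with larger
# s impact the end wall. rho_p, mu and u0 are assumed values, not from the paper.

MU_AIR = 1.8e-5     # air dynamic viscosity [Pa*s]
RHO_COAL = 1400.0   # coal particle density [kg/m^3]
U0 = 25.0           # conveying velocity [m/s]

def stopping_distance(d_p):
    tau = RHO_COAL * d_p**2 / (18.0 * MU_AIR)  # Stokes relaxation time
    return U0 * tau

for d_um in (1, 20, 90, 145):
    print(d_um, "um ->", stopping_distance(d_um * 1e-6), "m")
```

With these assumptions, 1 μm particles stop within a fraction of a millimetre (and are easily carried out by the reverse flow), while 145 μm particles would need metres to halt, consistent with the reported impact on the tube end for large dp.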
Variation of the conversion factor with particle diameter
The theoretical upper limit of k (i.e. 2) is not reached in the current computations. This was suspected to be partially because particles may collide with the side wall of the tube, reducing their momentum through normal impact and tangential friction. However, as shown in Table 2, the comparisons between cases 6a and 6b, and 9a and 9b, showed that this is not the case. Besides, either a numerical reduction of the fluid viscosity (case 5c versus 5a) or an increase of the particle density (case 5b versus 5a) induced major changes in k.

Figure 3. Variation of the conversion factor with particle diameter.
Conclusions
A momentum-based method is proposed to measure solid concentration in pneumatic pipelines. We defined a conversion factor k to indicate the degree to which particle momentum is converted into an increase in the fluid's static pressure. The computational verification was carried out with a one-end closed tube facing the incoming flow. The pressure increase at the tube's internal end, compared with pure fluid flow, was monitored and k was calculated. k is found to first increase with particle diameter dp to a maximum value between 1 and 2 and then decrease with dp, as theoretically predicted. k is insensitive to dp over a wide range, which is advantageous for field measurements since the particle size distribution is usually not known in advance. As particle size may vary across industrial circumstances, the tube shall be designed accordingly, and more complex geometries may be adopted. | 2020-04-27T20:38:31.616Z | 2020-03-21T00:00:00.000 | {
"year": 2020,
"sha1": "635c01be9c8592c5120c96d89fd4646d961e591d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/446/4/042088",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0bcb06e2969e86728ff7a2fa136e388d7425f24f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
268278287 | pes2o/s2orc | v3-fos-license | Comparison of the gene expression profile of testicular tissue before and after sexual maturity in Qianbei Ma goats
Background Long-term research on the reproductive performance of the Qianbei Ma goat has shown that puberty in male goats begins at 3 months of age, sexual maturity is reached at 4 months, and the goats are physically mature and able to mate at 9 months. Compared with other goat breeds, the Qianbei Ma goat features faster growth and earlier sexual maturity. Therefore, to explore the patterns of testicular growth in Qianbei Ma goats before sexual maturity (3 months old) and after sexual maturity (9 months old), testicular tissue was collected to examine morphological changes by HE staining, serum was collected to measure hormone levels, and the mRNA expression profile of the testis was analyzed by transcriptomics. In this way, the effect of testicular development on the reproduction of Qianbei Ma goats was further analyzed. Results The results showed that the area and diameter of the seminiferous tubules were larger at 9 months than at 3 months, and the numbers of spermatocytes, interstitial cells, spermatogonia and secondary spermatocytes in the lumen of the tubules showed a similar trend. The appearance of spermatozoa at 3 months of age indicated that puberty had begun in Qianbei Ma goats. ELISA tests for testosterone, luteinizing hormone, follicle stimulating hormone and anti-Müllerian hormone showed that the serum levels of these hormones at 9 months of age were all highly significantly different from those at 3 months of age (P < 0.01). There were 490 differentially expressed genes (DEGs; |log2(fold change)| > 1 and p value < 0.05) between the 3-month-old and 9-month-old groups, of which 233 genes were upregulated and 257 genes were downregulated (3 months of age was used as the control group and 9 months of age as the experimental group).
According to the GO and KEGG enrichment analyses of DEGs, PRSS58, ECM1, WFDC8 and LHCGR are involved in testicular development and androgen secretion, which contribute to the sexual maturation of Qianbei Ma goats. Conclusions Potential biomarker genes and relevant pathways involved in the regulation of testicular development and spermatogenesis in Qianbei Ma goats were identified, providing a theoretical basis and data support for later studies on the influence of testicular development and spermatogenesis before and after sexual maturity in Qianbei Ma goats.
Background
As an important reproductive organ in male mammals, the testis plays a crucial role in the reproductive process of domestic animals. Testicular development varies across periods and has a significant effect on spermatogenesis [1]. Testicular size has been found to be significantly positively correlated with ejaculate volume, sperm concentration and sperm viability, and negatively correlated with the percentage of abnormal spermatozoa, in bovine, caprine and porcine animals [2][3][4][5]. In addition, testicular size not only affects male reproductive performance but also affects litter size and litters per year [6]. Therefore, testicular development is one of the most effective means of evaluating the reproductive performance of a sire. Zhang found that testicular tissue sections of 30-day-old Changbai breeder boars had an obvious cord-like structure; only Sertoli cells and spermatogonia were observed in the seminiferous cords, the lumen had not formed, and there was no spermatogenesis. In 210-day-old Changbai breeder boars, the volume of the testis had increased substantially and the diameter of the tubule lumen was obviously larger, with many round and elongated spermatids appearing in the lumen; at the same time, the number of spermatogonia in the lumen was increased [7]. Studies have shown that after sexual maturity, small-tail Han sheep have the highest ejaculate volume, high semen density, a high sperm survival rate, a long sperm survival time in vitro, and good semen quality [8]. Testicular size is positively correlated with scrotal circumference. Yadav et al.
(2019) found that the scrotal circumference of buffalo was significantly positively correlated with ejaculation volume, sperm motility and viability; the larger the scrotal circumference, the better the semen quality [9]. In addition, the testis is the site of synthesis of the androgenic steroid testosterone, which is essential for males to maintain secondary sexual characteristics and the skeletal muscular system and to initiate spermatogenesis [10]. The major role of testosterone in spermatogenesis is to promote the reorganization of the structure of the blood-testis barrier to maintain its stability and to induce the movement of spermatogonial cells through the blood-testis barrier to the lumen of the seminiferous tubules [11]. Testosterone can alter the expression and post-translational modification of nearly 25 proteins, including those involved in DNA repair and RNA splicing, to maintain Sertoli cell-spermatocyte adhesion and enhance the adhesion proteins between Sertoli cells and immature germ cells, preventing the premature separation of round spermatids from Sertoli cells [12,13]. The amount of testosterone secreted by male animals varies at different times during growth and development. Previous studies have demonstrated that testicular development has a major influence on the production and secretion of testosterone, which gradually increases from a low level to its highest level around sexual maturity [14]. Wei found that, with age in male Liaoning cashmere goat kids, the secretion level of testosterone in peripheral serum also increased gradually, reaching 2.23, 5, 7.94, 10.41, 13.45, and 16.16 times the neonatal level at successive stages after 30 days of age [15]. Testicular development strongly influences the subsequent production activities of male progeny; therefore, it is very important to investigate the impact of testicular development on male reproductive development.
The Qianbei Ma goat is one of the three excellent local goat breeds in Guizhou and displays roughage tolerance, strong disease resistance, good herding behaviour, ease of captive keeping, a docile temperament, and strong adaptability [16]; it has great development potential and value. Based on the importance of the testes for Qianbei Ma goat reproduction, in this experiment we analyzed the mRNA expression profiles of the testes of Guizhou Qianbei Ma goats at 3 months of age (before sexual maturity) and 9 months of age (after sexual maturity) by transcriptomics, screened the differentially expressed genes in the different periods, and explored the functions of the differentially expressed genes to provide basic data for the enhancement of the reproductive ability of Guizhou Qianbei Ma goats.
Animal ethics
All animal experiments were carried out in strict accordance with the instructions provided by the Animal Care and Use Committee of the Laboratory Animal Ethics of Guizhou University (No. EAE-GZU-2021-E025, Guiyang, China; 30 March 2021).Effective procedures were implemented to reduce pain and distress, and overall health, zoonotic infections, and pathogenic microbial infectious diseases were all thoroughly controlled and monitored.
Experimental animals
Four healthy male Qianbei Ma goats were randomly selected at 3 months of age (average weight 17.21 kg) and four at 9 months of age (average weight 32.71 kg) at Fuxing Herd Co. in Zunyi City, Guizhou Province, China. The goats were anaesthetized by intravenous injection of propofol (0.5 mL/kg). Then, the veterinarian quickly dissected the scrotum with scissors and forceps to obtain testicular tissue samples. The semen parameters collected before the testicles were removed showed that the 9-month-old group displayed normal spermatogenesis. One of the testes was randomly selected for RNA isolation and histological analysis. Testicular samples were immediately cut into small pieces, transferred to cryogenic vials, and stored in liquid nitrogen.
The samples were shipped to LC Sciences in Hangzhou, China, for subsequent library construction and sequencing.
Morphology and tissue evaluation
Sections of testicular tissue were prepared. The sections were routinely deparaffinized with xylene; rehydrated for 3 min each in 100% ethanol, 95% ethanol, 80% ethanol and 70% ethanol and then water; stained with hematoxylin for 5 min and washed with water; differentiated in 1% hydrochloric acid in ethanol for 5 s and rinsed with tap water for 20 min; stained with eosin for 30 s and washed with water; dehydrated for 3 min each in 95% and 100% ethanol; and sealed with neutral gum after drying. CaseViewer 2.4 software was used to scan and save the images. Image-Pro Plus 6.0 analysis software was used to measure the area and diameter of the intact convoluted seminiferous tubules in the sections (unit: millimeters). Finally, the different cell types were counted to calculate cell numbers per unit area.
ELISA detection
Blood was collected from the goats by the neck blood collection method and centrifuged at 3000 r/min for 10 min at 4 °C, and the serum was separated.The levels of testosterone (T), luteinizing hormone (LH), follicle stimulating hormone (FSH) and anti-Müllerian hormone (AMH) were determined using the goat ELISA kit provided by Zhiqin Tiancheng Biological Company.
Total RNA extraction, purification and sequencing
High-quality RNA was isolated from the testicular samples using the TRIzol method according to the manufacturer's (Invitrogen) instructions. The RNA concentration was determined with a NanoDrop spectrophotometer, and RNA integrity was determined with an Agilent 2100 Bioanalyzer. The PCR products were purified (AMPure XP system), and library quality was assessed on an Agilent Bioanalyzer 2100 system. The Illumina NovaSeq platform was used to sequence the library. Raw reads were filtered, and Q30 and GC content were calculated, to obtain clean reads for subsequent analysis. Differential expression analysis of the two groups was performed using DESeq2, with |log2(fold change)| > 1 and p value < 0.05 set as the thresholds for significantly different expression.
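The DEG selection rule above can be sketched as a simple filter. The toy fold changes and p values below are invented for illustration; the real input would be the DESeq2 results table.

```python
import math

# Sketch of the DEG call: a gene is differentially expressed when
# |log2(fold change)| > 1 and p < 0.05. The toy table is invented.

genes = {
    # gene: (fold_change, p_value)
    "LHCGR":  (2.6, 0.004),   # up: log2(2.6) ~ 1.38, significant
    "ECM1":   (0.4, 0.010),   # down: log2(0.4) ~ -1.32, significant
    "ACTB":   (1.1, 0.800),   # essentially unchanged
    "GENE_X": (3.0, 0.200),   # large change but not significant
}

def classify(fc, p, lfc_cut=1.0, p_cut=0.05):
    lfc = math.log2(fc)
    if p < p_cut and lfc > lfc_cut:
        return "up"
    if p < p_cut and lfc < -lfc_cut:
        return "down"
    return "ns"  # not significant under the joint threshold

calls = {g: classify(fc, p) for g, (fc, p) in genes.items()}
print(calls)
```

Note that both conditions must hold jointly, as in the paper's criterion: a large fold change with a high p value (GENE_X) is not called.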
GO and KEGG enrichment analysis
Gene Ontology (GO) enrichment analysis of the DEGs and statistical enrichment analysis of the DEGs in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were performed using the online analysis tool (http://www.bioinformatics.com.cn/). GO terms and KEGG pathways were considered significantly enriched when P < 0.05.
Quantitative real-time PCR
To verify the reliability of the sequencing results, the relative abundance of the mRNAs of six selected transcripts was determined by qRT-PCR. Based on the sequences in the NCBI database, the corresponding primers were designed using Primer Premier 5.0 (Premier Inc., Canada); the primer sequences and parameters are shown in Table 1. Primer synthesis was performed by Shanghai Sangong Bioengineering Co. cDNA was used as the amplification template for qRT-PCR based on the SYBR Green method. The qRT-PCR amplification system (10 µL) contained 0.5 µL each of the upstream and downstream primers, 0.5 µL of template, 5 µL of 2X SYBR master mix, and 3.5 µL of enzyme-free water. Three replicates were set up for each sample, and a blank control with 3 replicates was set up at the same time. qRT-PCR was carried out under the following conditions: 95 °C for 2 min, followed by 40 cycles of 95 °C for 15 s, the corresponding annealing temperature (Table 1) for 15 s, and 72 °C for 1 min, with fluorescence signal acquisition after the completion of each extension. β-Actin was used as the internal reference.
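Relative qRT-PCR abundance is typically quantified with the 2^(−ΔΔCt) method against the β-actin reference. The paper does not spell out the formula, so the sketch below is a standard implementation with invented Ct values.

```python
# Sketch of relative quantification by the 2^(-ddCt) method, with beta-actin
# as the internal reference. All Ct values below are invented for illustration.

def relative_expression(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    d_ct_exp = ct_target_exp - ct_ref_exp      # normalize target to reference (9 months)
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize target to reference (3 months)
    dd_ct = d_ct_exp - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier at 9 months,
# with the beta-actin reference unchanged between groups.
fold = relative_expression(ct_target_exp=22.0, ct_ref_exp=18.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(fold)  # 4.0 => ~4-fold upregulation in the 9-month group
```

A 1-cycle drop in ΔCt corresponds to a doubling of relative expression, which is why Ct values are compared on the log2 scale.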
Statistical analysis
All data are presented as means ± SD of three biological replicates and three technical replicates to ensure the accuracy of the experimental data. Data were processed with SPSS (v25.0) software. The difference between the two groups was determined by t test, and P < 0.05 was considered statistically significant.
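The two-group comparison can be sketched with a pooled-variance Student's t test. The hormone values below are invented (only the group means echo the reported testosterone levels), and the critical value 2.447 corresponds to a two-tailed α = 0.05 with df = 6 for n = 4 per group.

```python
import math

# Pure-Python Student's t test with pooled variance, as a sketch of the
# two-group comparison. The sample values are invented (n = 4 per group).

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

three_months = [200.1, 205.3, 212.0, 221.9]   # e.g. testosterone, ng/ml (invented)
nine_months  = [295.5, 298.2, 305.1, 308.8]

t = t_statistic(three_months, nine_months)
T_CRIT = 2.447  # two-tailed alpha = 0.05, df = 6
print(t, abs(t) > T_CRIT)
```

With group means this far apart relative to the within-group spread, |t| greatly exceeds the critical value, matching the reported highly significant differences.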
Histomorphometric analysis of the Qianbei Ma goat testes before and after sexual maturity
The testicular tissue of the Qianbei Ma goat consisted of seminiferous tubules and testicular interstitium (Fig. 1). In terms of the size of the seminiferous tubules, the area and diameter of the tubules were larger at 9 months than at 3 months. There was an increase in the numbers of spermatocytes, interstitial cells, spermatogonia, and secondary spermatocytes in the lumen of the tubules compared with 3 months of age. Spermatozoa were observed in the lumen of the seminiferous tubules at 3 months of age, indicating that puberty had begun in the Qianbei Ma goats. The number of interstitial cells in the testicular tissue increased at 9 months of age compared with 3 months, while the number of Sertoli cells in the testis decreased. There was also an increase in secondary spermatocytes, but the difference was not significant (Table 2).
Expression levels of reproductive hormones before and after sexual maturity
The ELISA results showed (Fig. 2) that each hormone level at 9 months of age was significantly higher than that at 3 months of age (P < 0.01). The testosterone content was 209.85 ng/ml at 3 months of age and 301.90 ng/ml at 9 months of age (Fig. 2A); luteinizing hormone was secreted at 1.71 mIU/ml at 3 months of age and 1.88 mIU/ml at 9 months of age (Fig. 2B); and follicle stimulating hormone was secreted at 11.10 mIU/ml at 3 months of age and 14.28 mIU/ml at 9 months of age (Fig. 2C). The anti-Müllerian hormone content was 89.30 ng/ml at 3 months of age and 150.70 ng/ml at 9 months of age (Fig. 2D).
Total RNA quality testing and data quality control analysis
Eight samples were sequenced: four from the 3-month-old control group (M1, M2, M3, M4) and four from the 9-month-old experimental group (S1, S2, S3, S4). An average of 44,687,384 raw reads was obtained per sample. After quality control, an average of 43,202,260 clean reads was generated, ranging from 41,988,148 to 46,310,130 in the experimental group and from 39,167,034 to 47,103,696 in the control group. The average Q30 values of the experimental and control groups were 93.25% and 93.24%, respectively, satisfying the 90% requirement for Q30. The average GC contents of the experimental and control groups were 51.85% and 52.14%, respectively, indicating that the sequencing quality was good enough to support all analyses of the expression profiles obtained from the sequencing data (Table 3).
Differential expression analysis of Qianbei Ma goat testes before and after sexual maturity
To explore the role of mRNA in the regulation of testicular development in Qianbei Ma goats, we assessed a total of 27,830 genes; differential expression was determined using |log2(fold change)| > 1 and P value < 0.05 as the criteria. A total of 490 genes were differentially expressed, of which 233 were upregulated and 257 were downregulated (Fig. 3).
GO and KEGG enrichment analysis
The 490 differentially expressed genes screened were subjected to GO functional clustering and classified into biological processes (BPs), cellular components (CCs) and molecular functions (MFs), with a total of 23 terms enriched. Among them, biological process terms accounted for 26.53%, cellular component terms for 22.45%, and molecular function terms for 51.02%; the differentially expressed genes were thus mainly enriched in molecular functions (Fig. 4A). The upregulated differentially expressed genes were mainly involved in molecular functions, of which the most prominently enriched terms were peptidase activity and transferase activity (Fig. 4B). The downregulated differentially expressed genes participated in biological processes, with the extracellular region as the most prominently enriched term (Fig. 4C).
The KEGG analysis showed that 194 pathways were enriched among the 490 DEGs of the two groups. With a P value < 0.05 as the threshold for significant enrichment, we identified the 36 most significantly enriched pathways (Fig. 5A); these included serotonergic synapses, steroid hormone biosynthesis, neuroactive ligand-receptor interaction, the relaxin signaling pathway, the NF-κB signaling pathway, and others. KEGG enrichment analysis of the upregulated DEGs identified 23 pathways (P < 0.05), mainly within the disease pathway section (Fig. 5B). KEGG enrichment of the downregulated DEGs identified 36 pathways (P < 0.05), predominantly within the organismal systems section (Fig. 5C). To screen for effects on testicular development and androgen secretion in goats, combining the GO and KEGG analyses, we identified several pathways associated with cell growth and hormone secretion: neuroactive ligand-receptor interaction, the calcium signaling pathway, steroid hormone biosynthesis, and the Wnt signaling pathway. The DEG LHCGR is highly enriched in steroid hormone biosynthesis (Fig. 6). In addition, we also identified the upregulated DEGs CRB2, WFDC8 and PRSS58 and the downregulated DEGs DKK1 and ECM1.
RNA-seq data validation
To validate the RNA-Seq results, the relative abundances of four upregulated genes (LHCGR, CRB2, PRSS58, and WFDC8) and two downregulated genes (DKK1 and ECM1) were examined by RT-qPCR; the RT-qPCR expression patterns of the selected genes were consistent with the results of the RNA-Seq analyses (Fig. 7).
The PCR products were subjected to Sanger sequencing, verifying that the selected genes had the correct cyclization linker site. These results indicated that the gene sequencing data were reliable and could be used for further analysis.
Discussion
Goats are economically important animals, and the size and development of the testes in males directly determine fertility, influencing the economic benefits of the goat industry [17,18]. In general, there is a complex regulatory relationship between genotype and biological phenotype [19]. mRNA is one of the important forms of RNA in living organisms; it is formed by the transcription of coding genes followed by splicing modification and is subsequently transported to the cytoplasm to be translated into functional proteins that perform biological functions [20]. Therefore, understanding the regulation of genes is crucial for the growth and development of animals. In recent years, RNA-seq technology and bioinformatics have been used to analyze the expression levels of all gene transcripts at different time points. These genes are up- or downregulated at different levels of proteins and metabolites, which can induce phenotypic changes in animals. For example, 302 DEGs were screened for seasonal reproduction in male lionhead geese, and the HOX genes were determined to be important for the regulation of testicular development between the nonbreeding and breeding periods [21]. A total of 5068 DEGs were screened for testicular sexual maturation in Large White and Tongcheng pigs, and candidate genes such as TRIP13, NR6A1, STRA8, PCSK4, ACRBP, TSSK1, and TSSK6 were identified for testicular maturation [22].
The testicular seminiferous tubules are important sites for spermatogenesis and contain cells such as spermatogonia, spermatocytes, spermatids, and supporting (Sertoli) cells [23]. Our analysis of testicular tissue sections before and after sexual maturity in Qianbei Ma goats revealed that the area and diameter of the seminiferous tubules increased with age, while the number of supporting cells changed only slightly. Some researchers have found that testicular supporting cells usually mature before puberty and lose their proliferative ability after maturation, after which their number is maintained at a dynamic level [24]. We observed that spermatozoa had already appeared in the 3-month-old sections, while other studies have shown that sexual maturity is reached at 4 months of age in Qianbei Ma goats [25], indicating that puberty had already begun in these goats. This result is consistent with that of Ren et al. [26], who observed similar changes in testicular tissue morphology in 3-month-old dairy goats. Testosterone (T) is an important steroid hormone that regulates growth, development and sperm synthesis in males [27]. Luteinizing hormone (LH) promotes testicular development and the eventual maturation of spermatogenesis [28], and follicle-stimulating hormone (FSH) acts synergistically with testosterone in males to maximize sperm production [29]. Studies have shown that AMH levels are positively correlated with total sperm count, concentration and progressive motility [30]. In this study, the serum levels of these hormones were determined at the two ages in Qianbei Ma goats, and the levels at 9 months were all extremely significantly higher than those at 3 months (P < 0.01), similar to the findings of Zhang [7]. This indicates that before puberty the reproductive organs are not well developed and cannot secrete large amounts of reproductive hormones. After sexual maturity, the reproductive organs of Qianbei Ma goats gradually developed
completely, and most of the reproductive cells were able to secrete more reproductive hormones, thus gaining fertility.
Fig. 6 The KEGG steroid hormone biosynthesis pathway
We further investigated the molecular mechanism of testicular development before and after sexual maturity in Qianbei Ma goats using RNA-seq and identified a total of 27,830 genes. A total of 490 DEGs were detected in the comparison of the two groups, of which 233 were upregulated and 257 were downregulated. To explore the role of the DEGs in the development of the testes before and after sexual maturity in Qianbei Ma goats, we performed functional annotation by GO, which revealed that the differentially expressed genes were enriched in peptidase activity. Peptidase activity is an important physiological activity in the development of organisms [31]. It has been shown that the acid peptidase activity released from in vitro cultured porcine embryos is positively correlated with late development and embryo quality [32]. Serine proteases (PRSS), which have a nucleophilic Ser residue at the active site, constitute nearly one-third of all known proteases; many have been identified as testis-specific proteases and play an important role in sperm development and male reproduction [33][34][35]. PRSS58, a gene in the serine protease family, may be involved in the biological functions of the testes. CRB2 is a member of a family of cysteine- and glycine-enriched proteins that mediate protein-protein interactions with significant roles in cell growth, expansion, and differentiation [36]. WFDC8 (whey acidic protein four-disulfide core protein 8) is one of the proteins that regulates sperm maturation and contributes significantly to the immune function of the male reproductive tract [37,38]. The upregulated differential gene PRSS58, which was significantly enriched in peptidase activity by GO functional analysis in this study, may also be a potential candidate gene in sperm development. The GO-enriched downregulated DEGs included ECM1 and DKK1. ECM1 is a glycoprotein and a sperm detection marker product [39]. DKK1 is a WNT signaling antagonist, and the WNT signaling pathway
contributes to sperm development through the promotion of ovarian development and inhibition of testicular development in the early gonad, playing an important regulatory role in mammals [40]. Overall, the GO term-enriched DEGs may be involved in regulating the biological processes of testicular development and spermatogenesis in Qianbei Ma goats. LHCGR is a specific receptor for luteinizing hormone [41] that mediates the synthesis and secretion of androgens and growth factors by testicular mesenchymal stromal cells in male animals [42]. Sun et al. (2011) correlated polymorphisms of the GnRH and LHCGR genes with semen quality in cattle and found that the sperm density of individuals with the TT genotype at the G651656T locus of the LHCGR gene was significantly higher than that of the GT genotype [43]. Liu et al. (2009) analyzed polymorphisms of the 5′UTR and three exons of the LHβ gene in Simmental and Charolais cattle by PCR-SSCP and correlated them with semen quality traits [44]. They found an SNP in exon 2 whose mutation was significantly correlated with the deformity rate of frozen semen and with ejaculation volume, supporting LHCGR as a candidate gene influencing semen quality. KEGG enrichment results showed that LHCGR was significantly enriched in pathways related to testicular development and androgen secretion, such as neuroactive ligand-receptor interaction, the calcium signaling pathway, cortisol synthesis and secretion, the cAMP pathway, and ovarian steroid synthesis and secretion. For example, the classical cAMP signaling pathway plays an important role in promoting testosterone synthesis and cell proliferation and in regulating cellular homeostasis.
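GO/KEGG enrichment of a DEG list, as used above, is commonly assessed with a hypergeometric test: with N background genes, K annotated to a term, n DEGs, and k of those DEGs annotated to the term, the enrichment P-value is P(X >= k). A stdlib-only sketch (the background size and DEG count echo this study's 27,830 genes and 490 DEGs, but the term size K = 150 and overlap k = 12 are hypothetical):

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper tail P(X >= k): probability that at least k of the
    n DEGs fall in a GO/KEGG term covering K of the N background genes."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Background of 27,830 genes and 490 DEGs (from this study);
# the term size (150) and overlap (12) below are hypothetical.
p = enrichment_pvalue(N=27830, K=150, n=490, k=12)
print(f"enrichment P-value: {p:.3g}")
```

Enrichment tools then apply a multiple-testing correction (e.g., Benjamini-Hochberg) across all tested terms before calling a term significantly enriched.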
Conclusion
The dynamic changes in testicular development and spermatogenesis between pre- and post-sexual maturity in Qianbei Ma goats are characterized by the neuroactive ligand-receptor interaction pathway, the calcium signaling pathway, and steroid hormone-related biological processes. The DEGs PRSS58, ECM1, WFDC8, and LHCGR may be key genes regulating the molecular mechanisms of sexual maturation in Qianbei Ma goats. However, their specific regulatory mechanisms still need further study.
Fig. 3 Differential mRNA expression between experimental and control groups. Purple dots indicate the 257 genes that were significantly up-regulated, orange dots indicate the 233 genes that were significantly down-regulated, and blue dots indicate genes with no significant difference
Fig. 4
Fig. 5
Fig. 5 KEGG enrichment analysis of DEGs between 3-month-old and 9-month-old Qianbei Ma goats; (A) KEGG enrichment plot of all DEGs, where the horizontal coordinates are KEGG pathway terms and the vertical coordinates are the genes enriched in each term; (B) KEGG enrichment plot of up-regulated DEGs; (C) KEGG enrichment plot of down-regulated DEGs
Table 1
Detailed information of each primer
Table 2
Histomorphologic physicochemical indices of testis
Table 3
Quality control and sequencing data statistics | 2024-03-09T05:26:11.026Z | 2024-03-08T00:00:00.000 | {
"year": 2024,
"sha1": "ac3a35472a068620036ce4f1a7f04631ab80e37d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "16457f877f089d0adaba75f56e795e8ded708e8f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250289079 | pes2o/s2orc | v3-fos-license | Effect of Radical Laparoscopic Surgery and Conventional Open Surgery on Surgical Outcomes, Complications, and Prognosis in Elderly Patients with Bladder Cancer
Background. Bladder cancer is a common malignant tumor of the urinary system. It features multifocal lesions, easy recurrence, easy metastasis, poor prognosis, and high mortality. Objective. The aim of this study is to investigate the impact of laparoscopic radical cystectomy (LRC) and open radical cystectomy (ORC) on the surgical outcome, complications, and prognosis of elderly patients with bladder cancer. Materials and Methods. One hundred elderly bladder cancer patients who underwent surgery in our hospital from June 2019 to June 2021 were selected for this retrospective study and were divided into the ORC group and the LRC group (50 cases each) according to the surgical method. The ORC group was treated with ORC, and the LRC group was treated with LRC. The differences in surgical indices, immune function, recent clinical outcomes, and complications between the two groups were observed and compared. Results. The differences in mean operative time, mean intraoperative bleeding, intraoperative and postoperative transfusion rate, and transfusion volume between the LRC group and the ORC group were statistically significant. The differences in the mean time to resume eating, time to get out of bed, mean number of postoperative hospital days, and amount of postoperative narcotic analgesics used were also statistically significant (P < 0.05). There was no statistically significant difference in immune function between the two groups before surgery (P > 0.05), while CD8+ and B cells 1 week after surgery in the LRC group were significantly better than in the ORC group (P < 0.05), and the operation time of the LRC group was longer than that of the ORC group (P < 0.05).
Statistical analysis of postoperative complications showed that the overall incidence of postoperative complications in the LRC group was significantly lower than that in the ORC group (16.67% vs. 46.67%) (P < 0.05). Conclusion. LRC entails less surgical trauma and intraoperative bleeding, faster postoperative recovery, and fewer postoperative complications, providing a reference for clinical surgery in elderly bladder cancer patients.
Introduction
Bladder cancer is one of the ten most common clinical malignancies and is the most common malignancy of the urinary tract, ranking sixth in incidence worldwide [1,2]. Worldwide, the incidence of bladder cancer is highest in Southern Europe, Western Europe, and North America and is significantly higher than in poorer regions such as Central Africa [3]. In recent decades, the incidence and mortality of bladder cancer in China have been on the rise, influenced by factors such as the increasing ageing of the Chinese population, increased environmental pollution, and a growing number of smokers, and the upward trend is more pronounced in men than in women [4]. Bladder cancer patients have symptoms such as hematuria, difficulty urinating, urinary retention, and urinary tract obstruction. The disease affects the patient's life and health to a certain extent. About 90% of bladder cancer patients have hematuria as the initial clinical manifestation, usually painless, intermittent, gross hematuria, and sometimes microscopic hematuria [5]. Traditional surgical methods involve relatively large blood loss, large trauma, a high postoperative recurrence rate, and very slow recovery [6]. With the development of modern clinical medical technology, minimally invasive techniques are widely used in clinical practice. Minimally invasive surgery for bladder cancer patients minimizes complications and improves outcomes [7]. ORC is highly effective but has drawbacks such as high trauma, high blood loss, and slow recovery. More complications and increased mortality often follow ORC, so elderly patients may previously have been directed to conservative treatment modalities [8]. Numerous studies have confirmed that LRC can more fully expose the surgical field and improve surgical precision, thereby reducing the amount of intraoperative bleeding and the likelihood of transfusion [9].
The reduced surgical trauma also reduces the incidence of perioperative complications and shortens the time until patients can get out of bed and eat regularly, reducing the length of hospital stay [10]. LRC is now widely accepted by most urologists as a minimally invasive treatment modality [11]. This study observed the clinical effects of ORC and LRC and provides a reference for the clinical selection of appropriate surgical methods.
Research Subjects.
All records on the identity of patients included in this study will be kept in the hospital as required, patient identities will not be disclosed in the public reporting of the study results, and patients were informed of the test results in strict accordance with standard operating procedures. This study was approved by the ethics committee of our hospital. One hundred elderly bladder cancer patients who underwent surgery in our hospital from June 2019 to June 2021 were selected for this retrospective study and were divided into the ORC and LRC groups (50 cases each) according to the surgical procedure. The indications for laparoscopic surgery were as follows: (1) no severe cardiac or pulmonary impairment; (2) normal coagulation function; (3) mild abdominal distension; (4) preoperative consideration of malrotation of the bowel and doubtful diagnosis.
Inclusion and Exclusion Criteria
The inclusion criteria were as follows: (1) diagnosis of bladder tumor by cystoscopy and biopsy [12]; (2) meeting the surgical indications for radical cystectomy: invasive bladder cancer with TNM stage T2-4a, N0-X, M0, or high-risk non-muscle-invasive bladder cancer with a T1G3 (high-grade) tumor; (3) carcinoma in situ unresponsive to BCG treatment, recurrent NMIBC, etc., undergoing LRC or ORC; (4) age ≥65 years; (5) clear indications for surgery and willingness to accept surgical treatment; and (6) complete medical records and follow-up information.
Exclusion Criteria.
The exclusion criteria were as follows: (1) serious lesions of other vital organs precluding surgery, such as cardiac, pulmonary, hepatic, or renal insufficiency; (2) abnormal or impaired coagulation function, poorly controlled preoperative blood glucose or blood pressure, or incomplete relevant records; (3) other contraindications to laparoscopic surgery or suspected intestinal strangulation (such patients were directly recommended for open surgery), combined congenital diaphragmatic hernia, gastroschisis, or omphalocele, or incomplete clinical data; (4) other systemic malignancies, urinary tract infections, or stones; (5) a history of previous urinary system surgery; and (6) failure to cooperate actively with treatment or loss to follow-up.
Methods.
Patients in both groups completed cardiovascular, liver, and kidney function tests and other related examinations and excretory urography before surgery to optimize their physical condition; water and electrolyte imbalances, severe anemia, etc., were corrected first and treated with blood transfusion if necessary. Prophylactic antibacterial drugs were given 3 days before surgery, a semiliquid diet 2 days before surgery, and a liquid diet 1 day before surgery, with appropriate nutrients administered via intravenous supplementation the night before the operation. The ORC group was treated with ORC and the LRC group with LRC, as follows. All patients participated fully in the study and none dropped out.
ORC Treatment.
Endotracheal intubation was performed after anesthesia took effect, and the patient was placed in the supine position with routine disinfection of the surgical field skin and sterile draping. A longitudinal incision about 15-20 cm long was made from the middle of the suprapubic bone to the umbilicus; the skin, subcutaneous tissue, and anterior rectus abdominis sheath were incised sequentially, and the rectus abdominis muscle was separated to expose the prevesical space. The anterior bladder wall was bluntly and sharply freed, the retropubic space was entered, and the puboprostatic ligament was identified and cut near the pubic bone, after which the deep dorsal penile vascular plexus was sutured and divided. The top of the bladder was freed, and the peritoneum at the top of the bladder was separated from the bladder with an electric knife. Following this, the base of the bladder was freed more deeply, and the bilateral ureters were located, freed toward the bladder, separated, and severed. The ureters were ligated distally near the pelvis. The vas deferens and seminal vesicles were then visible; the vas deferens and seminal vesicle artery were ligated and divided, and dissection continued to the tip of the prostate, taking care not to damage the rectum. The left and right walls of the bladder were separated, and the pelvic fascia was seen distally. The lateral ligaments of the bladder were treated separately, and the neurovascular bundle posterior to the prostate was separated by clamping and cutting. The tip of the prostate was clamped with right-angle forceps; the urethra was cut and ligated immediately against the tip of the prostate; and the bladder, seminal vesicles, and prostate were removed (in women, the uterus, its adnexa, and part of the anterior vaginal wall were included).
The lymph nodes and adipose tissue around the iliac vessels and the obturator foramen were removed. The ureters on both sides were separated in the direction of the iliac vessels, the bilateral ureters were fully freed, the left ureter was pulled to the right side via the presacral route, and 6F single-J tubes were left in the bilateral ureters as stents to drain urine. Small incisions were made on the right lower abdominal wall to bring out the bilateral ureters, 4-0 absorbable sutures were used to fix the ureteral ends and the single-J tubes, and a right abdominal wall cutaneous ureterostomy was performed. After complete hemostasis, one abdominal drainage tube was left in place and fixed with sutures, the anterior rectus abdominis sheath, subcutaneous tissue, and skin were sutured in layers, the incision was wrapped with sterile gauze, and the ureterostomy was connected to a bag for drainage.
LRC Treatment.
Under general anesthesia with endotracheal intubation, the patient was placed in a head-low, foot-high supine position with the pillow removed and the buttocks slightly elevated in a mild anti-arch position, with shoulder braces fixed; the skin of the surgical area (including the perineal area) was routinely disinfected and draped. The first puncture point was located below the umbilicus; a small circular incision was made and dissected to the anterior rectus abdominis sheath, the skin was lifted firmly with a towel clamp, the Veress needle was placed into the abdominal cavity, CO2 insufflation was started, and the pressure was maintained at 12-15 mmHg. After the pneumoperitoneum was established, a 10 mm trocar was placed through the sub-umbilical incision and fixed with sutures, and the remaining four trocars were placed under direct laparoscopic view. The second and third puncture points were located lateral to the right and left rectus abdominis muscles, approximately 2-3 cm below the umbilicus; the fourth and fifth puncture points were located at McBurney's point and the contralateral point, respectively; a 12 mm trocar was placed at the third puncture point, and 5 mm trocars were placed at the remaining puncture points. The operator stood on the patient's left side, pushed the bowel cephalad, cut the retroperitoneum and vascular sheath along the surface of the right external iliac artery, and removed the lymph nodes and fatty tissue around the iliac vessels and the obturator foramen, taking care to protect the obturator nerve; the lymph nodes and fatty tissue on the left side were removed in the same way. The peritoneum of the posterior wall of the bladder was incised with an ultrasonic knife about 2 cm above the rectovesical pouch, the vas deferens was freed bilaterally, and each was clipped with a Hem-o-Lok and divided with the ultrasonic knife.
The anterior rectal fascia was opened and the prostate was separated from the anterior rectal wall, dissecting the posterior aspect of the prostate toward the urethra. The left and right walls of the bladder were then separated, the pelvic fascia was separated distally and exposed, the anterior wall of the bladder was then freed, the median umbilical ligament, the paramedian ligaments, and the retroperitoneum were severed, the prevesical space was bluntly dissected downward, the tip of the prostate was fully exposed, and the deep dorsal venous complex of the penis was freed and ligated with 2-0 Vicryl absorbable sutures. After dividing the dorsal deep vein complex, the urethra was divided immediately adjacent to the prostate tip and controlled with Hem-o-Lok clips near the bladder neck, and the bladder and prostate were completely disconnected and excised, with complete hemostasis of the wound. The right ureter was tracked from the retroperitoneum to the left side. The bilateral ureteral orifices were everted and sutured in a papillary shape and fixed subcutaneously to the external oblique aponeurosis. A 6F single-J tube was left in place in each ureter and connected to a bag for drainage. After complete hemostasis and packing with hemostatic gauze, one extraperitoneal drainage tube was left in place and fixed properly, the abdominal wall incisions were sutured sequentially, the incisions were dressed with sterile dressings, and the cutaneous ureterostomy was wrapped with oil gauze. In female patients, the uterus and its adnexa were removed laparoscopically before cystectomy was performed as described above.
Observation Index.
The observation indexes were as follows: ① Surgical indexes: we mainly observed the intraoperative and postoperative conditions, postoperative complications, and tumor treatment effects in both groups. The intraoperative and postoperative conditions included operation time, blood loss, blood transfusion volume, time to anal venting, time to resume feeding, time to get out of bed, and days of hospitalization. ② Immune indexes: CD8+, B cells, and NK cells were measured before and 1 week after surgery.
Statistical Analysis.
All statistical data in this study were entered into Excel software by the first author and the corresponding author, respectively, and statistical processing was performed with SPSS 25.0. Measurement data are expressed as mean ± standard deviation (x̄ ± s) and were compared using repeated-measures analysis of variance between groups. Count data are expressed as percentages (%) and were tested with the χ2 test. Univariate and multivariate logistic regression analyses were used to compare influencing factors, and risk factors with significant differences were screened. Correlations were tested using linear correlation analysis. Data that did not conform to a normal distribution are described by M (QR) and were compared using the Mann-Whitney test. All statistical tests were two-sided. Statistical significance was set at P < 0.05.
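For the 2x2 count comparisons described above (e.g., complication rates between the two groups), the Pearson χ2 statistic has a closed form, and for df = 1 the tail probability can be computed from the complementary error function. A stdlib-only sketch (the counts below are hypothetical illustrations, not the study's raw data):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, two-sided p-value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1, the chi-square upper tail equals erfc(sqrt(stat / 2))
    return stat, erfc(sqrt(stat / 2))

# Hypothetical counts: (complication, no complication) per group
stat, p = chi2_2x2(5, 25, 14, 16)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

With these illustrative counts the statistic exceeds the 3.84 critical value for df = 1, so the difference would be called significant at P < 0.05; SPSS's χ2 procedure implements the same test.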
Comparison of General Data.
Evidence-Based Complementary and Alternative Medicine
Comparisons of general data such as gender, mean age, tumor diameter, and tumor type between the two groups showed no significant statistical differences (P > 0.05). See Table 1.
Comparison of Surgery.
Compared with the ORC group, the LRC group showed significant differences in mean operation time, mean intraoperative blood loss, intraoperative and postoperative blood transfusion rate, and blood transfusion volume (P < 0.05). The mean time to resume eating, time to get out of bed, mean postoperative hospital stay, and dosage of postoperative narcotic analgesics also differed significantly from the ORC group (P < 0.05). See Figure 1.
Comparison of Immune Function.
There was no significant difference in immune function between the two groups before surgery (P > 0.05), but CD8+ and B cells differed significantly 1 week after surgery, with the LRC group better than the ORC group (P < 0.05). There was no significant difference in NK cells 1 week after surgery (P > 0.05). See Figure 2.
Comparison of Recent Clinical Efficacy.
The operation time of the LRC group was longer than that of the ORC group.
The postoperative HGB decrease, postoperative bowel function recovery time, pelvic drainage tube indwelling time, and postoperative hospital stay in the LRC group were all smaller or shorter than those in the ORC group, and the differences were statistically significant (P < 0.05). See Figure 3.
Complications. Statistical analysis of the incidence of postoperative complications showed that the total incidence of postoperative complications in the LRC group was significantly lower than that in the ORC group (P < 0.05). See
Discussion
Open radical cystectomy is the standard method for treating patients with bladder cancer, and its therapeutic effects are universally recognized, but the procedure is significantly traumatic for the patient; not only the amount of bleeding but also the postoperative complications adversely affect recovery [13]. Therefore, patients need a longer period of time to recover after surgery; coupled with the degeneration of physiological organ function, decreased immunity, and more underlying diseases in elderly patients, tolerance for open surgery is relatively low [14]. Therefore, not only is the surgical risk high, but the postoperative complication rate is also higher. In contrast, laparoscopic surgery is characterized by less trauma, less bleeding, fewer complications, and higher safety [15]. There is little disturbance of other organs in the abdominal cavity during surgery, and possible irritation and contamination of the abdominal cavity caused by air exposure are effectively avoided [16]. The implementation of laparoscopic surgery, without changing the principles and results of traditional surgery, has been widely accepted in medicine because it improves patient tolerance and facilitates recovery after surgery [17].
The LRC group in our study was superior to the ORC group in terms of intraoperative bleeding, blood transfusion rate, postoperative feeding time, time to get out of bed, analgesic requirement, and postoperative hospital stay, demonstrating the minimally invasive advantages of laparoscopic surgery. Elderly patients have many concomitant underlying diseases and reduced organ function, making radical resection of bladder cancer poorly tolerated and risky [18]. Therefore, LRC is particularly important for elderly patients. Laparoscopic surgery is less invasive, has less impact on intra-abdominal organs and physiological functions, provokes less inflammation, and interferes less with the patient's immune function, which is more conducive to the stabilization of the general condition of elderly patients [19]. Our experience is that the patient's vital organ function should be fully evaluated before surgery. Cardiac function, pulmonary function, liver function, and nutritional status should be fully regulated, with good control of blood glucose and blood pressure [20]. The surgeon should have ample experience in laparoscopic radical cystectomy. The operating time should be minimized to reduce the effect of CO2 on cardiopulmonary function under low abdominal pressure while ensuring the radical effect of tumor treatment [21]. Intraoperative hypothermia is closely related to the occurrence of postoperative complications, such as incisional infection and coagulation or circulatory dysfunction, and even increases patient mortality [22].
It is particularly important to prevent intraoperative hypothermia, and intraoperative warming facilities are routinely applied in our hospital; because of the small incision and mild pain of laparoscopic surgery, patients can be encouraged to get out of bed early after surgery and to cough and breathe deeply frequently, which helps prevent deep vein thrombosis and pulmonary complications [23]. All these factors can improve the prognosis of elderly patients, and with adequate preoperative preparation, laparoscopic radical cystectomy for elderly bladder cancer patients is safe and feasible, with the advantages of less trauma, faster recovery, and fewer complications [24]. The difference in CD8+ and B cells 1 week after surgery in our study was significant and favored the LRC group, indicating that laparoscopic radical cystectomy is more effective in elderly bladder cancer patients. A large number of studies at home and abroad have shown that, compared with ORC, LRC has the advantages of less intraoperative bleeding, a lower blood transfusion rate, faster recovery of postoperative gastrointestinal function, and a shorter postoperative hospital stay, although the operation time is relatively longer. Many studies have confirmed that LRC can more fully expose the surgical field and improve surgical precision, thus reducing the possibility of intraoperative bleeding and blood transfusion [25]. The reduction of surgical trauma can also effectively reduce the incidence of perioperative complications and shorten the time for patients to leave the bed and eat regularly, thus reducing the length of hospital stay [26]. Currently, LRC is widely accepted by most urologists as a minimally invasive treatment modality [27].
The extent of resection varies with the different anatomy of male and female patients: in addition to removal of the bladder, its surrounding adipose tissue, and the distal ureters, male patients should have the prostate and seminal vesicles removed, while female patients should have the uterus, adnexa, and part of the anterior vaginal wall removed [28]. If there is a possibility of bladder cancer invading the urethra, total urethrectomy should be performed intraoperatively. In younger male patients who require preservation of sexual function, intraoperative care should also be taken to protect the associated nerve and vascular tissues [29].
In our study, the operative time was longer in the LRC group than in the ORC group, and the LRC group had a smaller postoperative hemoglobin (HGB) drop and shorter postoperative bowel function recovery time, pelvic drain retention time, and postoperative hospital stay than the ORC group. Laparoscopic surgery is more difficult than open surgery and requires clinicians with extensive operating experience, and numerous domestic and international studies have shown that the duration of laparoscopic surgery is longer than that of open surgery, which is consistent with the results of this study [30]. Thanks to the innovation of laparoscopic instruments, the development of imaging technology, and the growth of surgeons' experience, a comprehensive comparison of domestic and international literature shows that laparoscopic operating times have been significantly reduced in recent years [31]. It is believed that with continuous technical innovation, the standardization of laparoscopic surgery, and the accumulation of experience, the operative time of LRC will be further reduced [32]. The intraoperative bleeding in the laparoscopic group in this study was significantly lower than that in the open group, which may be attributed to the following: laparoscopy magnifies the surgical field, anatomical structures are clearly visible, the location and course of blood vessels are easier to identify than in open surgery, stable pneumoperitoneal pressure effectively inhibits venous bleeding, the hemostatic effect of the ultrasonic knife is clear, and the deep dorsal penile vein complex can be treated more effectively than in open surgery [33]. The statistical analysis of postoperative complications in our study showed that the total incidence of postoperative complications in the LRC group was significantly lower than that in the ORC group.
The total incidence of postoperative complications was compared between the surgical groups, showing that the laparoscopic group had a significant advantage over the open group. The reasons for this may be the low invasiveness of laparoscopic surgery, the smaller incision, the lower possibility of contamination, and the lesser impact on body tissues and organs. The results of postoperative histopathological examination showed that the two groups were similar in pathological stage, histological type, grading, lymph node metastasis, and positive margins, with no significant differences observed. Comparative analysis shows that laparoscopic surgery can also completely eradicate the tumor and effectively treat invasive bladder cancer [34]. Earlier studies on laparoscopic radical cystectomy reported a higher rate of positive surgical margins; in contrast, more recent work has confirmed that with the continuous improvement of laparoscopy and operator experience, the rate of positive margins is significantly lower, which is consistent with the results of this study [35]. The small sample size and short follow-up period of our study are limitations.
Therefore, large randomized studies with long-term follow-up are needed to further evaluate the efficacy of laparoscopic radical cystectomy. In recent years, the incidence of bladder cancer in China has been rising year by year. Therefore, it remains the lifelong pursuit and goal of urologists to continuously innovate surgical techniques and improve surgical approaches, as well as to reduce the difficulty of surgical operations, improve the safety of treatment, seek effective pathways, minimize the recurrence rate of tumors, and increase the survival rate of patients. In the future, more advanced techniques and equipment will continue to emerge to further alleviate the pain of bladder cancer patients, reduce medical costs and burden, and improve quality of life. It is believed that with the continuous updating of laparoscopic instruments, the standardization of laparoscopic surgery, and the accumulation of operator experience, LRC can better exert its superiority in the treatment of muscle-invasive bladder cancer and demonstrate better clinical efficacy.
In conclusion, this comparative study of elderly bladder cancer patients shows that LRC is significantly more effective than ORC, with less surgical trauma, less intraoperative bleeding, faster postoperative recovery, shorter hospital stay, and fewer postoperative complications, which provides a reference for clinical surgery in elderly bladder cancer patients.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Estimation of the effects PM2.5, NO2, O3 pollutants on the health of Shahrekord residents based on AirQ+ software during (2012–2018)
Research objectives Intertwined with modern life, air pollution is not a new phenomenon. Air pollution imposes a significant number of deaths and disease complications on society, and it is therefore very important to determine the extent of its health effects in any society. This study sought to evaluate the concentrations of PM2.5, NO2, and O3 observed in Shahrekord and the short-term and long-term excess mortality attributed to them. Procedure Hourly concentrations of PM2.5, O3, and NO2 measured at different stations of the Shahrekord monitoring network were obtained from the Shahrekord Department of Environment (DOE). Then, for the different air quality monitoring stations, the 24-hour average PM2.5 concentration, the maximum 1-hour average NO2 concentration, and the daily maximum 8-hour O3 concentration were calculated using Excel 2010. Daily maximum 8-hour ozone concentrations exceeding 35 ppb were accumulated to calculate the SOMO35 index used for modeling. Results The numbers of deaths from IHD, COPD, lung cancer, ALRI, and stroke related to PM2.5 were 176, 7, 0, 10, and 105, respectively. The effect of ozone on respiratory mortality was zero. During the study period in Shahrekord, no respiratory mortality was attributed to ozone or to acute lower respiratory tract infection (ALRI). This is the first study on the health effects of air pollution in Shahrekord city. Conclusion A significant number of deaths due to air pollutants in Shahrekord have been reported. It can be concluded that designing and implementing strategies and measures to control air pollution would prevent both health effects and economic losses.
Introduction
Intertwined with modern life, air pollution is not a new phenomenon. In fact, it is the result of urban emissions from activities such as the production of goods, transportation, heating, recreation, and human labor [5,30]. In addition to environmental degradation and economic recession, air pollution is one of the top 10 causes of death worldwide, with death rates rising from 800,000 in 2000 to 1.3 million in 2010 [23]. The effects of air pollution on human health have a wide range, the most common of which include respiratory and cardiovascular diseases [4,20,32,33].
Natural processes absorb the pollutants that cause air pollution to some extent and restore air quality. However, once these limits are exceeded, pollutants accumulate in the environment and air quality deteriorates.
According to WHO statistics and data in 2006, more than 80% of urban residents are exposed to air quality levels that go beyond the WHO guidelines (WHO, [32]).
Numerous instruments of varying complexity have been developed and used to estimate the public health impact of changes in air quality (including both premature deaths and related diseases and their associated economic value) [1,10,17,21,28]. Among these tools, the WHO's AirQ+ software and the US Environmental Protection Agency's (EPA) Environmental Benefits Mapping and Analysis Program (BenMAP-CE) are widely used. The AirQ+ model provided by the World Health Organization (WHO) is the most authoritative tool for assessing the adverse effects of exposure to air pollution on human health. The software uses data processed in Excel to estimate the relative risk and the attributable fraction of each outcome, and displays the result as illness and mortality [8,12].
Because air pollution imposes a significant number of deaths and morbidities on society, it is important to determine the extent of its effects on health in any society. In addition to health-related goals, these results provide a rationale for lawmakers and officials to set new air pollution standards to increase funding for strategies and measures to reduce air pollution [19,22,29].
Management programs to control air pollution in large cities are considered as the most important strategies. They use accurate sources of data and information about environmental conditions to determine all the effects of air pollution on human health [24,2].
Assessing the effects of air pollution on health not only determines the adverse effects of air quality on public health, but is also useful when considering the potential implementation of various air quality policies. Therefore, assessing potential health effects is a reliable reference point for public health and environmental professionals. The present study aims to evaluate the concentrations of standard pollutants and the short-term and long-term health effects attributed to them for residents of Shahrekord (Iran) from 2012 to 2018.
Study, course and estimation method
With an area of 70 square kilometers, Shahrekord is the capital of Chaharmahal and Bakhtiari Province in Iran, located at 49°22′E and 32°20′N in the plains of the province. This is an ecological study whose main purpose was to assess the effects of PM2.5, NO2, and O3 pollutants on the health of Shahrekord residents based on AirQ+ software. Sampling was performed from March 2012 to March 2018. The total population of Shahrekord in the first, second, third, fourth, fifth, sixth, and seventh periods was 82,450, 95,632, 132,450, 170,198, 222,198, 275,549, and 288,199, respectively. Due to the paucity of comprehensive studies in Iran and the lack of sufficient information on the health consequences attributed to pollutants, findings of meta-analysis studies obtained by AirQ+ software in other countries were used in the present study, and this was one of its major limitations. The health consequences considered included the following: natural mortality in the population over 30 years of age; mortality from acute lower respiratory infections (ALRI) in people under 5 years of age; chronic obstructive pulmonary disease (COPD), heart disease, and lung cancer in the population over 30 years of age; and stroke and ischemic heart disease (IHD) in the population over 25 years of age.
The Indicators used in this study are presented in Tables 1 and 2.
In this study, we tried to estimate the impact assessment criteria for all pollutants. However, because the software cannot calculate the effects of carbon monoxide and sulfur dioxide among the standard pollutants, it was unfortunately not possible to estimate the health effects of these two pollutants.
Air quality data
This study was a time series and ecological research. Data on hospital admissions, total mortality, and mortality from cardiovascular and respiratory diseases from 2012 to 2018 (7 years) were collected on a daily basis from the main and referral hospitals in Shahrekord and also from the Deputy of Health of Shahrekord University of Medical Sciences.
Daily concentrations of pollutants were collected from the Department of Environmental Protection (DOE) of Chaharmahal and Bakhtiari Province for 7 years. These data included ozone, nitrogen dioxide, and particles smaller than or equal to 2.5 µm, measured at three monitoring stations installed at Jahad Square, Ostandari Square, and Chaharmahal Square in Shahrekord. At these stations, outdoor air pollutants and particles smaller than or equal to 2.5 µm were measured separately in different ways. To measure suspended particles, air is first pumped into the measuring devices. The device then measures the particle concentration based on the intensity of absorption and records it every hour. Following the Aphekom and WHO methods, only stations with at least 75% complete data were selected [10,28].
The location of sampling stations was selected based on crowded places in terms of vehicle traffic and population.
The method for determining the average concentration of each pollutant was as follows.
PM2.5: the 24-h average. NO2: the daily maximum 1-h average. O3: the sum of ozone over 35 ppb (SOMO35) [3]. For PM2.5, the 24-hour average for each station was first calculated, and the 24-hour averages of the different stations were then combined into a city-wide average. For NO2, the daily maximum 1-h concentration was calculated only for days with data for at least 18 h (75% completeness), and this maximum was taken as the value for that day. To calculate SOMO35, the daily maximum 8-h mean concentration was computed in ppb for all stations; values below 35 ppb were considered to have no health effects. Incomplete data coverage was corrected for as SOMO35_corrected = SOMO35_uncorrected × (N_total / N_valid), where SOMO35_uncorrected is the uncorrected form of SOMO35, N_total is the total number of days in the year, and N_valid is the number of days for which valid data are available. Air pollution standards for PM2.5, NO2 and O3 are set by the World Health Organization at 10, 40 and 100 micrograms per cubic meter, respectively (average annual concentration) [35].
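The SOMO35 calculation described above can be sketched in Python. This is a minimal illustration: the function name and the use of `None` for missing days are assumptions, while the 35-ppb cut-off and the N_total/N_valid correction follow the text.

```python
def somo35(daily_max_8h_ppb, n_total=365):
    """Sum of daily maximum 8-h ozone means over 35 ppb (SOMO35),
    corrected for incomplete coverage: uncorrected * N_total / N_valid."""
    valid = [c for c in daily_max_8h_ppb if c is not None]
    if not valid:
        return 0.0
    # Days below 35 ppb contribute nothing to the sum.
    uncorrected = sum(max(c - 35.0, 0.0) for c in valid)
    return uncorrected * n_total / len(valid)
```

For example, with daily maxima of 40, 30, and 50 ppb plus one missing day over a 4-day window, the uncorrected sum is 20 ppb·day and the correction scales it by 4/3.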
Demographic information
In Iran, population statistics are monitored and recorded daily by health centers under the auspices of medical universities, and thus it is the most reliable source of population statistics. The statistics used in this study were annually collected from health centers under the auspices of shahrekord University of Medical Sciences.
Baseline incidence
The rate of baseline incidence (BI) related to each health effect (number of basic health consequences per population (100,000)) for each city was obtained separately from the Deputy of Health of Shahrekord University of Medical Sciences and Ministry of Health.
BI is obtained according to the following formula, where B is the baseline rate of the health outcome per 100,000 people, and AP is the attributable proportion (AP) of the health effects of air pollution.
Relative risk (RRs)
In this study, due to limitations such as insufficient previous studies determining the RR index for the target areas, the default relative risk (RR) values in the AirQ+ model were used. These values are obtained from meta-analysis studies [27]. The RR values for total mortality and respiratory mortality were 1.065 (1.04-1.083) and 1.014 (1.005-1.024), respectively. The RRs for ALRI, COPD, lung cancer, IHD, and stroke were estimated using the integrated exposure-response (IER) function.
The relative risk index for different concentrations was obtained according to the following formula [27].
RR = exp[β(X − X0)], where X, X0, and β represent, respectively, the mean concentration of the pollutant in the target city, the cut-off concentration, and the change in ln(RR) per unit change in concentration X.
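As a hedged sketch, the log-linear relative risk and the resulting attributable cases can be computed as below. The function names are illustrative, and the AP formula for a fully exposed population, AP = (RR − 1)/RR, is an assumption based on common AirQ-style health impact calculations, not the authors' exact implementation.

```python
import math

def relative_risk(x, x0, beta):
    """Log-linear concentration-response: RR = exp(beta * (x - x0));
    concentrations at or below the cut-off x0 carry no excess risk."""
    return math.exp(beta * max(x - x0, 0.0))

def attributable_cases(rr, baseline_per_100k, population):
    """Attributable proportion AP = (RR - 1) / RR for a fully exposed
    population, scaled to attributable cases by the baseline incidence."""
    ap = (rr - 1.0) / rr
    return ap * baseline_per_100k * population / 100_000
```

At the cut-off concentration the RR is exactly 1 and no cases are attributed, which matches the convention that only exceedances of the guideline contribute to the estimated burden.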
Statistical analyses
The concentrations of all three pollutants were analyzed separately for the different years. The Kolmogorov-Smirnov and Levene tests were used to check normality and equality of variance, and the Kruskal-Wallis test was used where variances were unequal.
Results and discussion
In this study, the health effects attributed to PM2.5 and NO2 for concentrations higher than those set by WHO guidelines and those attributed to O3 concentrations above 35 ppb were estimated by AirQ + software.
To interpret the results correctly, it should be noted that the health effects attributed to PM2.5 and NO2 were calculated only for concentrations higher than the WHO guidelines. In addition, the health effects of O3 were estimated for concentrations above 35 ppb.
The mean seven-year concentrations of PM2.5, NO2, and O3 were 49.94, 287.48, and 59.42 µg/m³, respectively. SOMO35 ozone levels were zero in all years of the study period.
The annual average and standard deviation of pollutants are shown in Fig. 1.
The reason for the lack of an ozone effect on the health of the inhabitants of this area was the low ozone level (below the permissible limit) in the air of Shahrekord, which had no adverse effect.
In this study, it was found that the trend of pollutant concentrations in Shahrekord during the study period was irregular and did not follow a specific rule.
Dastoorpoor et al. [6] examined cardiovascular mortality attributable to ambient air pollutants in Ahvaz, Iran, and found that the average daily concentrations of ozone and nitrogen dioxide in the period 2008-2015 were 62 (31.07 ppb) and 44.20 micrograms per cubic meter, respectively [6]. Tables 3 and 4 present the attributable proportion (AP) and number of natural and respiratory deaths due to short-term exposure, and Tables 5 and 6 those due to long-term exposure, to PM2.5, NO2, and O3. Table 2 presents the health outcomes and baseline incidence during the study period. The attributable proportion represents the percentage of health outcomes in a population due to a given pollutant. In the case of natural death, the highest and lowest natural mortality were related to NO2 and O3, respectively. In general, no ozone-related health effects were identified, and at this ozone concentration there is no concern about the occurrence of health-related consequences. The highest long-term health effects of PM2.5 and NO2 were observed in the third and sixth years, respectively. On average, natural mortality in Shahrekord due to short-term exposure to PM2.5 in the first, second, third, fourth, and fifth years was 12%, 7.7%, 13.8%, 4.8%, and 12.28%, respectively. In the sixth and seventh years, there were no natural deaths due to short-term exposure. These findings differ slightly from previous studies.
For example, Hopke et al. reported that about 3.60-5.02% of natural deaths in Ahvaz were due to exposure to PM2.5 [18].
The total number of natural deaths due to long-term exposure to PM2.5 was 1278, and to short-term exposure, 95. The number of natural deaths due to long-term exposure to NO2 was 1531, and to short-term exposure, 100. NO2 had a worse status than PM2.5 and was identified as the main responsible pollutant. No deaths were attributed to ozone.
In a study conducted by Naddafi et al. in Tehran (Iran), entitled "Determining the effects of air pollutants in Tehran in 2012 on health", the results showed that the largest share of health effects was attributed to particulate matter of 2.5 and 10 µm (PM2.5 and PM10) [25]. In the present study period, by contrast, the largest effects were attributed to NO2.
In a study in Ahvaz, Iran, the AP and the total number of respiratory deaths due to ozone exposure were determined to be 6.17% and 173, respectively, which is higher than in the present study. This difference may be due to differences in geographical distribution, the population at risk (population census recording, taking growth rate into account), or differences in the daily recorded averages of pollutants. Table 4 presents attributable cases due to short-term exposure to PM2.5, NO2, and O3 during the study period. The numbers of deaths from IHD, COPD, lung cancer, ALRI, and stroke related to PM2.5 were 176, 7, 0, 10, and 105, respectively. The effect of ozone on respiratory mortality was zero. During the study period in Shahrekord, no respiratory mortality due to ozone or acute lower respiratory tract infection (ALRI) was determined.
No previous studies have evaluated the health effects of short- and long-term exposure to gaseous pollutants in Shahrekord. This is the first study to investigate the health effects of air pollutants in Shahrekord.
Air pollution imposes direct and indirect costs on countries. The World Bank Group report shows that in 2013 the health effects of ambient air pollution in Iran led to a decrease in total welfare of $30.6 million and 2.48% of GDP, respectively. In addition, the total lost labor force production and its share in Iran's GDP were $1471 million and 0.12%, respectively [31]. Therefore, air pollution in Shahrekord is considered a serious problem and requires the attention of policymakers to take preventive and control measures. By designing and implementing strategies and measures to control air pollution, including continuous monitoring of air pollution indicators and identifying influential factors, we can prevent both the health effects and the economic losses caused by air pollution. Since motor vehicles and dust storms in the Middle East are the main sources of air pollution in Shahrekord, program development and cooperation between organizations and even neighboring countries, as well as improvements in motor vehicles, would lead to significant reductions. The results of this study will be useful if accompanied by political and economic regulations.
The strengths of this study were as follows. First, all data related to air pollutants and the baseline incidence were collected from government and trusted organizations in Chaharmahal and Bakhtiari Province (Shahrekord) and from the Ministry of Health, which provided a relatively large volume of data. Second, compared with previous studies, the 7-year period and the large amount of data allowed us to examine the relationships with a high level of confidence and reliability. Third, the inclusion of four air pollution monitoring sites provided a basis for better demonstrating the effects of air pollution compared with other studies.
However, this study has its limitations. First, as in other studies, the effects and interplay between air pollutants and health (mortality and hospital admission) require careful interpretation, as the effect might vary by region. Second, fixed monitoring stations in specific urban locations are taken to represent the total exposure to pollutants for all residents of the area. This generalization may not always hold. Moreover, exposure to pollutants depends on many conditions, such as indoor and outdoor activities, residence, occupational exposure, and so on.
Third, other potential contributing factors, such as BMI, education and income, smoking, physical activity, and medical history, are not included in this study, which may have a potential impact on the relationship between air pollutants and hospitalization, mortality etc.
Fourth, the relative risks (RR) used to estimate the effects of pollutants on health are based on program defaults and studies in other countries. Finally, the main limitation of this study is its ecological nature, which does not allow us to control potentially disruptive factors at the individual level, such as respiratory habits, age, diet, existing or genetic diseases, socioeconomic status etc.
Conclusion
We investigated the short-term and long-term health effects associated with exposure to PM2.5, NO2, and O3 in Shahrekord over a seven-year period using the AirQ+ model. Pollutant concentrations were often higher than WHO guidelines. The short-term effects of PM2.5 on health were greater than those of NO2. However, the long-term effects of NO2 were greater than those of PM2.5, and overall, the effects of NO2 were greater than those of PM2.5. The health effects of ozone were zero. In general, all attributable mortality is estimated to be due to PM2.5 and NO2.
The total number of natural deaths due to PM2.5 and NO2 in all studied years was 1074 and 1631, respectively. The total number of deaths from IHD, COPD, lung cancer, ALRI and stroke attributed to PM2.5 were 176, 7, 10, 0, 105, respectively. Other studies have shown that the health effects of air pollution impose direct and indirect costs on countries. Therefore, by designing and implementing strategies and measures to control air pollution, we might be able to curb both health effects and economic losses. Policymakers and experts in the field should focus on reducing air pollution.
Therefore, due to the worrying situation of pollutants in Shahrekord, the relevant authorities are advised to take preventive and corrective measures to reduce or eliminate pollutants while conducting the necessary research in this field.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table 6
Attributable cases due to short-term exposure to PM2.5, NO2, and O3 during the study period.
Does Brand Awareness Influences Purchase Intention? The Mediation Role of Brand Equity Dimensions
This paper aims to identify the antecedent role of brand awareness in other dimensions of consumer-based brand equity (CBBE) and its impact on purchase intention. It is a quantitative study based on a survey conducted with 622 smartphone users. The theoretical hypothesis test was performed by structural equation modeling (PLS-SEM) and ordinary least squares (OLS) regression to analyze the mediation effect. The results demonstrate that brand awareness does not directly impact purchase intention. This effect is only observed when it is mediated by the three dimensions of CBBE perceived quality, brand associations, and brand loyalty. This investigation makes two major contributions. First, it demonstrates that knowing a brand is not enough to generate consumers’ purchase intent. Second, it uses the mediating effect of the other dimensions of CBBE (associations, loyalty, and perceived quality) to demonstrate that brand awareness acts as a first step in building brand value for consumers.
INTRODUCTION
Companies' dedication to build a strong and competitive brand in the perspective of consumers has become one of the key priority factors in the organizational environment (Christodoulides, Cadogan, & Veloutsou, 2015). This is due to the important role of the brand in consumer decision making (Aaker, 1996), which makes brand management necessary to bring better performance to organizations and develop advantages over their competitors (Boicu, Cruz, & Karamanos, 2015). The relevance of brand studies has brought the interest of the academic and professional community in the discussions about its value from the consumer's perspective. The so-called consumer-based brand equity (CBBE) has been studied and is defined as the set of assets that brand name and symbol hold in relation to a product (Keller, 1993;Aaker, 1996).
Due to the complexity and subjectivity surrounding the perception of brand equity, developing a CBBE conceptualization and measurement, with its formative dimensions and expected outcomes, is a challenging task (Christodoulides et al., 2015). Over the years, different dimensions of CBBE were identified and discussed (Christodoulides & Chernatony, 2010;Veloutsou & Guzman, 2017). Among the multiplicity of conceptualizations developed over the years, Aaker's model (1996) is highlighted as the most adopted one (Vieira, Sincorá, Pelissari, & Carneiro, 2018). This author has identified that brand equity is comprised of brand loyalty, perceived quality, brand awareness and brand associations. Through these dimensions, brand equity would be able to deliver greater value to the company through increased prices and margins, competitive advantage, and greater consumer buying intent (Aaker, 1996).
However, as a multidimensional construct, it is important to analyze the effects and impacts of each of the dimensions of brand equity and how they relate to each other (Su, 2016). Among the dimensions, brand awareness can be considered the most neglected and the one with the greatest possibility of discussion and divergence of opinions (Romaniuk, Wight, & Faulkner, 2017). This construct is conceptualized as the degree to which consumers are aware that a brand is part of a product category (Assael & Day, 1968). Previous studies argued that brand awareness has a positive and direct impact on purchase intention (Keller, 1993; Wu & Ho, 2014; Akkucuk & Esmaeili, 2016). However, the emergence and growth of new brands in recent years may show that simply being aware of a brand does not indicate a positive or negative perception. This might only be the first step towards generating attitudes and behaviors regarding the brand (Su, 2016). Therefore, brand awareness itself might not be enough to increase consumers' purchase intent towards unknown brands. On the other hand, this construct may allow other positive consumer relationships with the brand to appear, such as perceived quality, brand loyalty, and brand associations (Pappu & Quester, 2016; Foroudi et al., 2018), and thus generate purchase intention. Therefore, this study aimed to analyze the antecedent role of brand awareness in the other dimensions of consumer-based brand equity and its impact on consumer purchase intention.
For this purpose, we chose to study smartphone brands. This choice is justified by the fact that this product is one of the most used today by the world population (Statista, 2019a). In addition, smartphones have an increasing importance in the global market and may have different characteristics, presenting a medium replacement cycle and low, medium, or high cost depending on product specifications (Kim, Chun, & Lee, 2014; Jyothsna, Mahalakshmi, & Sandeep, 2016). It is noteworthy that among the top 10 brands in the world, six are linked to the technology segment and four develop smartphones (Interbrand, 2019).
In order to fulfill this objective, we conducted a survey with 622 smartphone users. The data were analyzed using structural equation modeling (PLS-SEM) and ordinary least squares (OLS) regression with the PROCESS macro. The results provide evidence that brand awareness plays an antecedent role with respect to the other brand equity dimensions: brand loyalty, brand associations, and perceived quality. In turn, these dimensions mediate the relationship between brand awareness and purchase intention.
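The mediation logic behind this analysis can be illustrated with a minimal sketch. The data here are simulated, and the variable names, effect sizes, and percentile-bootstrap procedure are illustrative assumptions, not the study's actual PLS-SEM estimation: awareness (X) affects purchase intention (Y) only through a mediator such as brand loyalty (M), so the indirect effect a×b is nonzero while the direct effect is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulated standardized scores: awareness (X), one mediator (M), intention (Y).
awareness = rng.normal(size=n)
loyalty = 0.6 * awareness + rng.normal(size=n)   # a-path: X -> M
intention = 0.5 * loyalty + rng.normal(size=n)   # b-path: M -> Y, no direct X effect

def ols(predictors, y):
    """OLS coefficients [intercept, b1, b2, ...] via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([awareness], loyalty)[1]                 # X -> M
coef = ols([loyalty, awareness], intention)
b, c_prime = coef[1], coef[2]                    # M -> Y and direct X -> Y
indirect = a * b

# Percentile bootstrap CI for the indirect effect (PROCESS-style)
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    a_i = ols([awareness[idx]], loyalty[idx])[1]
    b_i = ols([loyalty[idx], awareness[idx]], intention[idx])[1]
    boot.append(a_i * b_i)
lo, hi = np.percentile(boot, [2.5, 97.5])
# Full mediation pattern: indirect-effect CI excludes zero, direct effect ~ 0.
```

A bootstrap confidence interval for a×b that excludes zero, combined with a non-significant direct effect, is the pattern conventionally read as full mediation, which is the structure the results above describe for awareness, the other CBBE dimensions, and purchase intention.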
This study is relevant due to the divergent discussions concerning the relationship between the dimensions of CBBE. Over the years, it has been common to find previous research that treats these four dimensions of CBBE linearly and independently of each other in consumer behavior (Cobb-Walgren, Ruble, & Donthu, 1995; Hanzaee & Asadollahi, 2012). However, there are still open questions about the role of these dimensions and their relationships (Severi & Ling, 2013). Moreover, this study is intended to contribute to the business community by bringing a new perspective on brand awareness and showing that this variable can be used as a precursor of the others to build strong brand value from the consumer's perspective.
CONSUMER-BASED BRAND EQUITY
CBBE can be defined as the set of assets linked to the brand name and symbol that generates value for a product/service delivered to the consumer (Aaker, 1996), or as the differential effect of brand awareness on consumer response to brand marketing strategies (Keller, 1993). According to Aaker (1996), brand equity is a multidimensional construct composed of four dimensions, namely: brand awareness, perceived quality, brand associations, and brand loyalty.
In previous studies, CBBE was shown to play an important role in the consumer's buying decision-making process, especially in the stages of searching for information and evaluating alternatives (Jung & Shen, 2011; Calvo-Porral et al., 2015; Akkucuk & Esmaeili, 2016; Sharma et al., 2015). Brands with greater value decrease consumers' search time and cost, thereby reducing the effort needed to make a good product choice and the associated risk (Aaker, 1996). Jyothsna et al. (2016) suggest that brand equity plays an important role in shaping consumer buying intent and makes consumers consider the brand as one of their first buying options. Calvo-Porral et al. (2015) suggest that managers need to consider each of the dimensions of CBBE when developing the marketing strategies of organizations. On the other hand, if CBBE is considered a multidimensional concept (Aaker, 1996), it is necessary to analyze each of its dimensions, described below.
Brand associations: an important ingredient of brand perception which occurs when the consumer thinks about a brand and develops some type of association linked to the memory that one has about it (Michel & Donthu, 2014). These associations may include product attributes, lifestyle, personality, or symbols (Yoo & Donthu, 2001). It is a type of mechanism that helps the consumer to remember the brand faster. Thus, the greater the experience with the brand, the greater the strength of the associations (Aaker, 1996).
Brand awareness: can be defined as the strength that the brand has in the consumer's mind (Aaker, 1996). Brand awareness involves two main elements: recall and recognition (Keller & Lehmann, 2006). It is possible to make an analogy of this concept with advertising posters. If consumers' minds had multiple posters, each one referring to a brand, awareness would be based on the size of the posters. Thus, the larger the poster, the greater the awareness of that brand. Therefore, it refers to the consumer's ability to remember the brand as part of a certain product category (Huang & Sarigölü, 2014;Da Costa, Patriotra, & Angelo, 2017).
Perceived quality: it is defined as the consumer's knowledge of the overall quality or superiority of a brand when comparing it with others (Aaker, 1996). This construct is considered high or low according to the intangible perception of the consumer (Yoo & Donthu, 2001). For Desai, Kalra and Murthi (2008), perceived quality refers to the consumer's knowledge about what he/ she sees and feels when looking and/or touching a product of a certain brand.
Brand loyalty: is one of CBBE's main assets. It is the measure of the link between the consumer and the brand, and the likelihood that the customer may change brands when the brand undergoes a price or product change (Aaker, 1996). This dimension is also defined as a positive consumer behavioral or emotional response to a brand (Pedeliento et al., 2016).
THE RELATION BETWEEN BRAND AWARENESS, CBBE DIMENSIONS, AND PURCHASE INTENTION
Several new brands are emerging in the market and competing on equal terms with already established brands (Pullig, Simmons, & Netemeyer, 2006). In this situation, consumers' knowledge and awareness of a brand's existence in a product category is not always a strong enough reason to directly affect purchase intention (Burnett & Hutton, 2007). This may also be linked to technological advancement and the wide variation in prices and features across product models, which leads consumers to pay more attention to these attributes than to whether the brand is known or not (Wu & Ho, 2014).
• H1: There is no direct positive relationship between brand awareness and purchase intention.
On the other hand, the fact that the brand is known opens a range of opportunities for consumers to develop positive behaviors and attitudes related to the other dimensions of CBBE: quality perception, brand associations, and brand loyalty (Pappu & Quester, 2016; Foroudi et al., 2018).
When consumers are more aware of a brand, they are more confident and more likely to become loyal to it, whereas brands with a low level of awareness may find it harder to penetrate the market (Keller, 1993). Brand recognition is seen as a precursor to brand loyalty (Keller, 1993; Pappu & Quester, 2016). Empirical studies have indicated a positive relationship between brand awareness and brand loyalty in different industries, such as cosmetics (Chinomona & Maziriri, 2017), hospitality (Xu, Li, & Zhou, 2015), and smartphones (Jing, Pitsaphol, & Shabbir, 2014).
• H2a: There is a positive relationship between brand awareness and brand loyalty.
In turn, greater brand loyalty makes consumers more likely to buy the brand's products, and it fosters repurchase and positive word-of-mouth (Foroudi et al., 2018). Loyal customers also create the possibility of increasing sales volume, attracting new consumers, and providing commercial leverage through distribution channels, which opt for the security of brands that have loyal customers (Ranjbariyan, Shahin, & Jafari, 2012). Other empirical studies have also validated the positive impact of brand loyalty on purchase intention, stating that loyal buyers tend to recommend the brand to others and continue to buy its products even when the price is higher than competitors' (Calvo-Porral et al., 2015; Kim & Kim, 2005; Akkucuk & Esmaeili, 2016).
• H2b: There is a positive relationship between brand loyalty and purchase intention.
If consumers have heard of a brand at some point, and have had even indirect experiences with it, possibilities of generating brand associations emerge (Chan, Boksem & Smidts, 2018). Thus, once consumers are aware of the brand, images and perceptions about it can arise in their minds (Tariq, Abbas, Abrar, & Iqbal, 2017). According to Shafiri (2014), brand awareness has a direct link with cognitive thinking and cognition, which can be considered dimensions of brand associations. Pitta and Katsanis (1995) argue that brand awareness allows brand and product associations to be built and incorporated into consumer memory. Accordingly, there is evidence of a connection between awareness and brand associations in which the former precedes the latter (Keller, 1993; Dew & Kwon, 2010; Foroudi et al., 2018).
• H3a: There is a positive relationship between brand awareness and brand associations.
In turn, brand associations may have a significant impact on consumer buying behavior (French & Smith, 2013), since associations generate value in different ways, such as helping to process and retrieve information, establishing brand differentiation and positioning, and creating positive feelings about the brand (Dew & Kwon, 2010; Jyothsna et al., 2016). Paço, Rodrigues, and Rodrigues (2015) argue that some specific positive dimensions of brand association, such as utility and affect, impact consumer purchase intention.
• H3b: There is a positive relationship between brand associations and purchase intention.
Regarding perceived quality, previous studies argue that consumers prefer to buy products from familiar and well-known brands, as they believe the products will have higher quality, thus lowering the risk of their purchase (Desai, Kalra & Murthi, 2008; Das, 2015; Calvo-Porral & Lévy-Mangin, 2017). Authors have tested the relationship between perceived quality and brand awareness in different contexts and found that consumers' perception of a brand improves when they already have some familiarity with it (Chi, Yeh, & Yang, 2009; Severi & Ling, 2013).
• H4a: There is a positive relationship between brand awareness and perceived quality.
Quality perception enables consumers to reduce their uncertainty in decision making. The fact that one brand has higher quality than others lowers the purchase risk and increases the expectation of satisfaction when using the product (Calvo-Porral & Lévy-Mangin, 2017). In addition, perceived quality also allows organizations to make use of premium pricing, that is, charging a higher price relative to the market without suffering a competitive disadvantage (Kim & Kim, 2005). Also, a higher perception of quality has a positive effect on brand value (Wang, 2017). Thus, it may improve consumers' purchase intention (Petrick, 2004).
• H4b: There is a positive relationship between perceived quality and purchase intention.
Data Collection and Sample
Given the proposed hypotheses, the methodological strategy was quantitative, implemented through a survey. For this purpose, a questionnaire with closed-ended questions was used to identify the characteristics and opinions of the studied population. The sample consisted of undergraduate students. Although this choice limits the generalization of conclusions, the profile of college students matches the age range of the main smartphone users (Statista, 2019b). In addition, previous CBBE studies have also used this type of sample (Yoo & Donthu, 2001; Atilgan, Aksoy, & Akinci, 2005; Hanzaee & Assadollahi, 2012; Jyothsna et al., 2016).
The minimum sample size was defined according to Hair Jr. et al. (2017), who suggest verification through statistical power. The analysis was aided by the G*Power software, using two parameters: the test power (power = 1 − β, where β is the Type II error probability) and the effect size (f²). The calculation also considered the construct with the largest number of predictors, which in the present model is purchase intention, with four incoming path arrows (Hair Jr. et al., 2017). The software indicated that a sample of 85 cases would already yield a statistical power of 80.30%. After the questionnaire was elaborated, it was sent to five academic professionals with experience in the area in order to gather suggestions for improvement. Based on their guidelines, some adjustments were made, and a pretest was then applied with 28 students. The aim was to analyze the applicability of the questionnaire regarding its comprehensibility and construction, as well as to preliminarily check the behavior of the relationships between the variables in the small sample. The results obtained in the pretest were considered satisfactory, which allowed the field application of the survey.
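The power computation described above can be reproduced outside G*Power from the noncentral F distribution. The sketch below is illustrative only: the function name is ours, and the assumed effect size f² = 0.15 (Cohen's conventional "medium" value) is an assumption, since the paper does not report the f² it entered; the λ = f²·N noncentrality convention matches G*Power's fixed-model linear regression test.

```python
from scipy import stats

def regression_power(f2, n_predictors, n, alpha=0.05):
    """Power of the overall F test in multiple regression
    (fixed model, R^2 deviation from zero), as in G*Power."""
    df1 = n_predictors             # numerator degrees of freedom
    df2 = n - n_predictors - 1     # denominator degrees of freedom
    ncp = f2 * n                   # noncentrality parameter lambda = f^2 * N
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)

# Four predictors point at purchase intention; with the assumed
# "medium" f^2 = 0.15, n = 85 reproduces roughly the 80.30% reported.
power = regression_power(f2=0.15, n_predictors=4, n=85)
```

Larger samples raise the power of the same test, which is how an a priori analysis pins down the minimum n for a target power.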
The questionnaire was developed using the SurveyMonkey online tool and emailed to all students enrolled in university undergraduate courses. It is worth emphasizing that all questions were asked based on user experience regarding the brand of their current smartphone. The survey obtained a total of 720 responses. After collection, suspicious response patterns were found, characterized by Hair Jr. et al. (2017) as the phenomenon that occurs when the respondent marks the same scale item for a high proportion of questionnaire questions. In most cases, it is recommended to exclude responses that present this type of pattern (Hair Jr. et al., 2017). Therefore, 58 questionnaires were eliminated, resulting in 662 valid cases.
A considerable part of the sample (63%) is composed of young people aged 16 to 22 years old, followed by respondents aged 23 to 29 years old (25%). This distribution is consistent with the population under analysis, undergraduate students. The age range of the sample is also aligned with the object of study chosen for this research, considering that smartphones are mostly used by young people in their 20s (Statista, 2019b). Regarding gender, the distribution was roughly even; however, females formed a slight majority, representing 51% of the total sample (n = 340). All brands in the sample also maintained approximately the same distribution between genders.
Respondents were also asked about the smartphone brands used. The sample was concentrated in five main brands: Motorola (29% of the total), Samsung (28% of the total), Apple (16% of the total), Asus (7% of the total) and LG (6% of the total). The remaining 13% of the sample use brands such as Lenovo, Xiaomi, Nokia and Sony. When asked which brands they would like to choose in their future smartphone purchase, the results showed that respondents focused on the Motorola (27%), Apple (26%) and Samsung (24%) brands. It is noteworthy that 33% of the sample indicated a preference for a different brand from the current one used in case of future purchase, which demonstrates that they are likely to change the brand of their smartphone.
Measures
The measurement used in this questionnaire was a five-point Likert scale ranging from "strongly disagree" to "strongly agree". The operationalization of each variable is based on available instruments from prior relevant literature. The constructs perceived quality, brand loyalty, and brand associations were adapted from Yoo and Donthu (2001), who aimed to develop a multidimensional CBBE scale. Brand awareness operationalization was also adapted from Yoo and Donthu (2001). Purchase intention was measured using a scale adapted from Grewal, Monroe, and Krishnan (1998). The items of each variable are presented in Appendix A.
Data Analysis
Data analysis was divided into two steps. The first was the application of partial least squares structural equation modeling (PLS-SEM) through SmartPLS 3.0 to validate the measurement and structural models. This technique examines relationships using a set of methods to identify and analyze multiple dependency relationships between variables through a path diagram (Hair Jr. et al., 2017). The steps used to validate the measurement and structural models were based on Hair Jr. et al. (2017), who establish the criteria for determining internal consistency, convergent and discriminant validity, significance, and collinearity.
The second stage involved the analysis of mediation through the PROCESS macro, employed according to Hayes (2018). The hypotheses developed in this study aimed to analyze the indirect path of relationships through mediation, using ordinary least squares (OLS) regression analysis, which is routinely used for this purpose, justifying its application (Hayes, 2018). Moreover, through the PROCESS macro it is possible to analyze the whole model as the aggregate sum of its parts, unlike PLS-SEM, allowing better inferences for theory construction (Hayes, Montoya, & Rockwood, 2017). For these reasons, it was considered appropriate to use PLS-SEM for the validation of the measurement and structural models, and OLS via the PROCESS macro to analyze the total effect of mediation.
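The core logic that Hayes's tool automates, estimating an indirect effect a·b by OLS and testing it with a percentile bootstrap, can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration on simulated data with a single mediator (the study's model has three mediators), not the authors' implementation:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Indirect effect a*b for simple mediation X -> M -> Y via OLS."""
    a = np.polyfit(x, m, 1)[0]                        # path a: M regressed on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # path b: Y on M, controlling X
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        est.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(est, [2.5, 97.5])

# Simulated full mediation: X influences Y only through M (true a*b = 0.42).
rng = np.random.default_rng(42)
x = rng.normal(size=500)
m = 0.6 * x + rng.normal(scale=0.5, size=500)
y = 0.7 * m + rng.normal(scale=0.5, size=500)
lo, hi = bootstrap_ci(x, m, y)  # CI excluding zero -> significant indirect effect
```

A confidence interval that excludes zero is the bootstrap analogue of the significance test reported for the indirect paths in the mediation analysis below.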
Model Validation
In order to evaluate the measurement model, the PLS algorithm was run in SmartPLS 3.0 on the valid sample of 662 responses. The model converged in 7 iterations, well below the recommended maximum of 300, thus meeting the convergence requirements of the algorithm (Hair Jr. et al., 2017).
The first criterion analyzed was internal consistency, which uses Cronbach's alpha and composite reliability as validation parameters. As recommended by Hair Jr. et al. (2017), all constructs had Cronbach's alpha above 0.708 and composite reliability below 0.95. However, some indicators showed outer loadings below the recommended value (<0.708) and indicator reliability below the minimum (<0.5). Therefore, the indicators corresponding to those values were excluded from the analysis, namely AW05, AS01 and AS02 (one indicator of the Brand Awareness construct and two of the Brand Associations construct). The exclusion of these indicators is justified by its positive impact on construct validity, as verified through the convergent validity and composite reliability indices (Hair Jr. et al., 2017). In addition, all constructs met the discriminant validity criteria, as indicated by the Fornell-Larcker test results (Table 1).
After validating the measurement model with satisfactory quality levels, the next step was to analyze the structural model. This phase involves examining the model's predictive capabilities and the relationships between latent variables. The steps suggested by Hair Jr. et al. (2017) were used to evaluate the structural model. These steps consisted of performing the model collinearity tests, path coefficients significance, R² value level, f² effect size, predictive relevance (Q²) and q² effect size, as well as the validation of the measurement model. All of the structural model tests were performed using SmartPLS 3.0 software.
The collinearity analysis of the structural model was performed using the variance inflation factor (VIF) values. The endogenous latent variables of the model presented VIF values lower than 5.0 as indicated by Hair Jr. et al. (2017) as acceptable. This shows that respondents understood the constructs as phenomena different from each other.
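VIF values follow from regressing each predictor on the others: VIF_j = 1/(1 − R_j²), with 5.0 as the cutoff used by Hair Jr. et al. (2017). A minimal numpy sketch, illustrative and run on simulated data rather than the study's indicators:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # regress column j on the remaining columns (with an intercept)
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 2))                               # independent predictors
B = np.column_stack([A, A[:, 0] + 0.01 * rng.normal(size=200)])  # near-duplicate column
```

Independent predictors give VIF near 1, while the near-duplicate column in `B` inflates its VIF far beyond the 5.0 threshold.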
The second stage of the analysis consisted of evaluating the significance and relevance of the path coefficients of the structural model. The relationships of the Associations (AS), Loyalty (LO) and Perceived Quality (PQ) constructs with the Purchase Intention (PI) construct showed a relevant level of significance (1%). The relationship between the Awareness (AW) and Purchase Intention (PI) constructs was not significant, as presented in Table 2; the path coefficient of this relationship is negative and close to zero. On the other hand, the effect of brand awareness as an antecedent of the variables associations, perceived quality and loyalty was significant. Following the validation steps of the structural model, the coefficient of determination (R²) was evaluated. According to the criteria established by Hair Jr. et al. (2017) for research in the area of consumer behavior, the R² found for the purchase intention construct (R² = 0.625) can be considered high. The fourth stage of model analysis evaluated the effect size f², which measures the impact of an exogenous latent variable on an endogenous one. In the analysis of the relationships of the constructs with the purchase intention variable, the results showed a small effect for brand associations (f² AS → PI = 0.104) and moderate effects for perceived quality (f² PQ → PI = 0.251) and loyalty (f² LO → PI = 0.233). As expected, there was no effect for brand awareness (f² AW → PI = 0.000), since the relationship was not significant. Analyzing the relationship of brand awareness as a predecessor of the other dimensions of brand equity, the effect based on f² was large for brand associations (f² AW → AS = 0.352) and perceived quality (f² AW → PQ = 0.300), and moderate for loyalty (f² AW → LO = 0.211).
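The f² values reported above come from the change in R² when one exogenous construct is omitted, scaled by the full model's unexplained variance. A small sketch (the function names are ours, and the implied ΔR² is derived from the reported numbers rather than taken from the paper):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f^2: drop in R^2 when a construct is omitted,
    scaled by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def f2_label(f2):
    # Cohen's conventional thresholds, as used by Hair Jr. et al. (2017)
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "moderate"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# With R^2 = 0.625 for purchase intention, the reported f^2 = 0.251 for
# perceived quality implies omitting PQ lowers R^2 by 0.251 * (1 - 0.625).
f2_pq = f_squared(0.625, 0.625 - 0.251 * (1 - 0.625))
```

Reading f² this way makes the reported sizes concrete: the 0.251 for perceived quality corresponds to a drop of roughly 0.09 in R² when that construct is removed.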
Finally, the fifth step was the analysis of the predictive relevance of the model (Q²), which was performed using the blindfolding procedure. This procedure is used to evaluate the ability of exogenous variables to predict the endogenous variable. The result obtained was a value above zero (Q² = 0.422) which supports the predictive relevance of the model to the endogenous construct. In addition, the relative impact of q² of exogenous constructs on the endogenous construct was also evaluated. The constructs brand associations (q² = 0.163), perceived quality (q² = 0.154), and loyalty (q² = 0.108) indicated a moderate predictive relevance to purchase intention. Furthermore, as expected, the brand awareness (q² = 0.000) did not point to direct predictive relevance to purchase intention. Figure 2 shows the path coefficients and the significance of relationships between model variables. It also presents the indicators that were maintained after all validation criteria of the measurement model.
Multiple Mediation Analysis
The mediating effects were also verified through the indirect relationship between the brand awareness and purchase intention variables (Table 3). As previously described, this step was analyzed by OLS regression through the PROCESS macro with 10,000 bootstrap subsamples. The results demonstrate that the direct relationship between brand awareness and purchase intention is not statistically significant at the 95% confidence level. The indirect relationship between these variables, on the other hand, was statistically significant through the constructs perceived quality, brand associations, and brand loyalty. Thus, full mediation is supported, indicating that the effect of brand awareness on purchase intention (0.6323) is significant and positive only in its indirect form, mediated by perceived quality, brand loyalty, and brand associations.
DISCUSSION
Understanding consumer behavior for a given product and brand is a complex task that involves different variables. This study investigated the antecedent role of brand awareness in the other dimensions of CBBE and its impact on consumer purchase intention. The findings suggest that brand awareness does not directly impact purchase intention; rather, the relationship between these variables is fully mediated by the CBBE dimensions: perceived quality, loyalty, and brand associations.
These findings may be connected to the fact that brand awareness is only the first step towards consumer perception of other aspects (Su, 2016). Thus, the fact that a brand is more famous and well-known may not be a strong enough reason to influence decision-making for technology products such as smartphones, which are characterized by high variation in price and features (Wu & Ho, 2014). In addition, the increased use of online media to search for information and reviews about brands and products shortens the distance between lesser-known and better-known brands (Kudeshia & Kumar, 2017).
On the other hand, brand awareness has an indirect impact on purchase intention, mediated by perceived quality, brand associations and brand loyalty. Some brand equity models (e.g. Cobb-Walgren et al., 1995; Hanzaee & Asadollahi, 2012) present the dimensions in a linear and independent way, neglecting the relationships among them. The rationale behind the antecedent role of brand awareness is clear: when a brand starts to be recognized by the consumer, a range of possibilities opens up, such as the creation of brand associations, perception of quality, and loyalty (Pappu & Quester, 2016; Foroudi et al., 2018), which can increase the consumer's willingness to buy a product of this brand.
Studies such as Yoo and Donthu (2001) argue, based on their results, that brand awareness and associations should be combined. However, the two constructs are conceptually distinct and give rise to different consumer behaviors. Brand awareness acts as a variable that enables consumers to form associations with a brand, which would not be possible without this prior knowledge and familiarity (Foroudi et al., 2018).
In addition, the results also demonstrated a positive relationship between awareness and perceived quality, which is aligned with previous studies (Chi, Yeh, & Yang, 2009; Severi & Ling, 2013). This demonstrates that familiar and well-known brands generate a perception of quality in consumers, who, in turn, choose to buy products of these brands because they believe there will be a lower risk associated with the purchase. The mediating role of brand loyalty is also aligned with previous studies (Keller, 1993; Pappu & Quester, 2016). Brand awareness can contribute to greater market penetration and enable the generation of consumer loyalty.
Given the above, brand awareness can still be considered a relevant variable for brand management, even though it does not have a direct relationship with purchase intention. Furthermore, the antecedent role of brand awareness in the other dimensions of brand equity demonstrated in this paper can prompt discussions about previously developed models (e.g. Hanzaee & Asadollahi, 2012) that place this variable in the same position as the others.
Theoretical and Practical Implications
From a theoretical point of view, the study contributed by integrating the variables that together constitute brand equity with one of its main expected consequences. Some research in the literature (Cobb-Walgren et al., 1995; Chen & Chang, 2008) analyzed the relationship between brand equity and purchase intention using CBBE as a single construct, whereas this research provided an understanding of such variables through an empirical study of each dimension. In addition, as its main theoretical contribution, this study presents brand awareness as a predecessor of the other dimensions of CBBE. According to the results, this construct only indirectly impacts purchase intention, which differs from previous studies (Keller, 1993; Malik, 2013; Wu & Ho, 2014; Akkucuk & Esmaeili, 2016) that did not control for the effect of the other dimensions of CBBE, which may explain the divergence of results.
The practical contribution of the research is a greater understanding of consumer behavior and the factors that impact brand users' attitudes. The results allow companies to design brand strategies and to support decision-making through metrics linked to the studied variables, such as brand awareness.
Limitations and Future Research
This study has some limitations that open challenges for future research. The first concerns the population chosen for this study. Although the sample of students is consistent with the profile of most smartphone users, the concentration of responses in this type of respondent does not allow the generalization of findings to people with other characteristics, which opens the possibility of future studies that broaden the population and the category of products explored.
Secondly, participants' responses were always based on the smartphone brands they currently used. Consequently, the questionnaire was answered based on the user's past and present experiences. Therefore, future surveys using less well-known predefined brands may contribute to the findings of this study.
Thirdly, brand awareness includes both recognition and recall (Keller, 1993). In our study, we used brand awareness as a general construct, without analyzing recall and recognition separately. Future research can test these two dimensions in the model to understand how each one affects the brand equity dimensions and to verify whether the strength of the relationships is higher for either of the two: recognition or recall.
Improved filters for gravitational waves from inspiralling compact binaries
The order of the post-Newtonian expansion needed, to extract in a reliable and accurate manner the fully general relativistic gravitational wave signal from inspiralling compact binaries, is explored. A class of approximate wave forms, called P-approximants, is constructed based on the following two inputs: (a) The introduction of two new energy-type and flux-type functions e(v) and f(v), respectively, (b) the systematic use of Pade approximation for constructing successive approximants of e(v) and f(v). The new P-approximants are not only more effectual (larger overlaps) and more faithful (smaller biases) than the standard Taylor approximants, but also converge faster and monotonically. The presently available O(v/c)^5-accurate post-Newtonian results can be used to construct P-approximate wave forms that provide overlaps with the exact wave form larger than 96.5% implying that more than 90% of potential events can be detected with the aid of P-approximants as opposed to a mere 10-15 % that would be detectable using standard post-Newtonian approximants.
I. INTRODUCTION AND METHODOLOGY
Inspiralling compact binaries consisting of neutron stars and/or black holes are among the most promising candidate sources for interferometric detectors of gravitational waves such as LIGO and VIRGO. The inspiral wave form enters the detector bandwidth during the last few minutes of evolution of the binary. Since the wave form can, in principle, be calculated accurately, it should be possible to track the signal phase and hence enhance the signal-to-noise ratio by integrating the signal for the time during which the signal lasts in the detector band. This is achieved by filtering the detector output with a template which is a copy of the expected signal. Since in general relativity the two-body problem has not been solved the exact shape of the binary wave form is not known and experimenters intend to use as a template an approximate wave form computed perturbatively with the aid of a post-Newtonian expansion [1][2][3][4][5][6][7][8][9][10][11]. Thus, template wave forms used in detection will be different from the actual signal that may be present in the detector output. As a result the overlap of template and signal wave forms would be less than what one would expect if they had exactly matched.
In this paper we explore the order of the post-Newtonian expansion needed to extract in a reliable and accurate manner the actual, fully general relativistic signal. Previous attacks on this problem [2,3,[11][12][13][14] suggested that a very high post-Newtonian order (maybe as high as v 9 /c 9 beyond the leading approximation) might be needed for a reasonably accurate signal extraction [15]. Our conclusions are much more optimistic. We show that, starting only from the presently known (v/c) 5 -accurate (finite mass) post-Newtonian results [6][7][8][9][10], but using them in a novel way, we can construct new template wave forms having overlaps larger than 96.5% with the "exact" wave forms. Since a reduction in signal-to-noise ratio by 3% only results in a loss in the number of events by 10%, and since our computations indicate that the new templates entail only small biases in the estimate of signal parameters (see Tables V and IX below), we conclude that presently known post-Newtonian results will be adequate for many years to come.
Before entering the details of our construction, let us clarify, at the conceptual level, the general methodology of this work. Central to our discussion is the following data analysis problem: On the one hand, we have some exact gravitational wave form h^X(t; λ_k), where λ_k, k = 1, . . . , n_λ are the parameters of the signal (comprising, notably, the masses m_1 and m_2 of the members of the emitting binary [16]). On the other hand, we have theoretical calculations of the motion of [17], and gravitational radiation from [6-10], binary systems of compact bodies (neutron stars or black holes). The latter calculations give the post-Newtonian expansions (expansions in powers of v/c) of, essentially [18], two physically important functions: an energy function E(v) and a gravitational flux function F(v) (see exact definitions below). Here, the dimensionless argument v is an invariantly defined "velocity" [19] related to the instantaneous gravitational wave frequency f_GW (= twice the orbital frequency) by

    v = (π m f_GW)^{1/3} ,   (1.1)

where m ≡ m_1 + m_2 is the total mass of the binary. Let us denote by E_{T_n} and F_{T_n} the n-th order Taylor approximants of the energy and flux functions,

    E_{T_n}(v; η) = Σ_{k=0}^{n} E_k(η) v^k ,   (1.2)

    F_{T_n}(v; η) = Σ_{k=0}^{n} F_k(η) v^k ,   (1.3)

where

    η ≡ m_1 m_2 / (m_1 + m_2)^2   (1.4)

is the symmetric mass ratio. For finite η, the Taylor approximants (1.2), (1.3) are known for n ≤ 5 [17,6-10]. In the test mass limit, η → 0, E(v) is known exactly and F(v) is known up to the order n = 11 [1-5,11]. [There are logarithmic terms appearing for n ≥ 6 that we shall duly discuss later, but in this Introduction we simplify the notation by not introducing them.] The problem is to construct a sequence of approximate wave forms h^A_n(t; λ_k), starting from the post-Newtonian expansions (1.2), (1.3). In formal terms, any such construction defines a map from the set of the Taylor coefficients of E and F into the (functional) space of wave forms (see Fig. 1).
Up to now, the literature has considered only the most standard map, say T,

    h^T_n = T[E_{T_n}, F_{T_n}] ,   (1.5)

obtained by inserting the successive Taylor approximants [20] (1.2), (1.3) into the integral giving the time evolution of the gravitational wave phase, see e.g. [12,13]. [Details are given below.] In this work, we shall define a new map, say "P", based on a four-stage procedure,

    h^P_n = P[E_{T_n}, F_{T_n}] .   (1.6)

The two essential ingredients of our procedure are: (i) the introduction, on theoretical grounds, of two new, supposedly more basic and hopefully better behaved, energy-type and flux-type functions, say e(v) and f(v), and (ii) the systematic use of Padé approximants (instead of straightforward Taylor expansions) when constructing successive approximants of the intermediate functions e(v), f(v). Let us also note that we further differ from previous attacks on the problem by using a numerical (discrete) fast Fourier transform to compute the overlaps between the exact and approximate wave forms. We find that the previously used analytical stationary phase approximation gives only poor estimates of the overlaps (see Table II).
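To see why step (ii) can accelerate convergence, consider the simplest example of Padé resummation: rebuilding a rational approximant from a finite set of Taylor coefficients. The snippet below uses scipy's `pade` on the exponential series purely as an illustrative stand-in for the intermediate functions e(v), f(v); it is not the paper's construction.

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through x^4
coeffs = [1 / math.factorial(k) for k in range(5)]

# [2/2] Pade approximant: a ratio of two quadratics built
# from the same five Taylor coefficients
p, q = pade(coeffs, 2)

x = 1.0
taylor_val = sum(c * x**k for k, c in enumerate(coeffs))
pade_val = p(x) / q(x)

# The Pade form reuses the identical expansion data but tracks
# exp(x) more closely away from x = 0
err_taylor = abs(taylor_val - math.exp(x))
err_pade = abs(pade_val - math.exp(x))
```

The rational form encodes information about the function's behavior beyond the radius where the truncated series degrades, which is the heuristic exploited when resumming e(v) and f(v).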
One of the aims of the present paper is to show that the new sequence of templates h^P_n(t; λ) is, in several ways, "better" than the standard one h^T_n(t; λ). In this respect, it is convenient to introduce some terminology. We shall say that a multi-parameter family of approximate wave forms h^A(t; μ_k), k = 1, . . . , n_μ is an effectual model of some exact wave form h^X(t; λ_k), k = 1, . . . , n_λ (where one allows the number of model parameters n_μ to be different from, i.e. in practice, strictly smaller than n_λ) if the overlap, or normalized ambiguity function, between h^X(t; λ_k) and the time-translated family h^A(t − τ; μ_k),

    A(λ_k, μ_k) ≡ max_τ <h^X(λ_k), h^A(τ; μ_k)> / [<h^X, h^X> <h^A, h^A>]^{1/2} ,   (1.7)

is, after maximization on the model parameters μ_k [21], larger than some given threshold, e.g. max_{μ_k} A(λ_k, μ_k) ≥ 0.965 [22]. [In Eq. (1.7) the scalar product <h, g> denotes the usual Wiener bilinear form involving the noise spectrum S_n(f) (see below).] While an effectual model may be a precious tool for the successful detection of a signal, it may do a poor job in estimating the values of the signal parameters λ_k. We shall then say that a family of approximate wave forms h^A(t; λ^A_k), where the λ^A_k are now supposed to be in correspondence with (at least a subset of) the signal parameters, is a faithful model of h^X(t; λ_k) if the ambiguity function A(λ_k, λ^A_k), Eq. (1.7), is maximized for values of the model parameters λ^A_k which differ from the exact ones λ_k only by acceptably small biases [23]. A necessary [24] criterion for faithfulness, and one which is very easy to implement in practice, is that the "diagonal" ambiguity A(λ_k, λ^A_k = λ_k) be larger than, say, 0.965.
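For a whitened toy problem the overlap maximized over time shifts reduces to a normalized cross-correlation computable with one FFT. The sketch below is a simplified illustration of this diagnostic: it assumes a flat noise spectrum (constant S_n(f)), whereas a real computation would weight each frequency bin by the detector noise.

```python
import numpy as np

def match(h1, h2):
    """Overlap between two real signals, maximized over circular time
    shifts. With a flat noise spectrum the Wiener inner product reduces
    to an ordinary inner product, so one FFT gives all shifts at once."""
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    corr = np.fft.ifft(H1 * np.conj(H2)).real  # <h1, h2 shifted by tau> for every tau
    return corr.max() / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
chirp = np.sin(2 * np.pi * (30 * t + 40 * t**2))  # toy inspiral-like frequency sweep
m_same = match(chirp, np.roll(chirp, 300))        # time offset is maximized over
m_diff = match(chirp, np.sin(2 * np.pi * 200 * t))
```

A template identical to the signal up to a time shift scores an overlap of 1, while a waveform with little spectral content in common scores near zero; the 0.965 threshold quoted above sits between these extremes.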
Using this terminology, we shall show in this work that our newly defined map, Eq. (1.6), defines approximants which, for practically all values of n we could test, are both more effectual (larger overlaps) and more faithful (smaller biases) than the standard approximants Eq. (1.5). A related property of the approximants defined by Eq. (1.6) is that the convergence of the sequence (h P n ) n∈N is both faster and much more monotonic than that of the standard sequence (h T n ) n∈N . This will be shown below in the (formal) test mass limit η → 0 where one knows both the exact functions E(v) and (numerically) F (v) [13], and their Taylor expansions to order v 11 [11]. The convergence will be studied both "visually" (by plotting successive approximants to E and F ) and "metrically" (by using the ambiguity function (1.7) to define a distance between normalized wave forms). Most of our convergence tests utilize the rich knowledge of the post-Newtonian expansions (1.2), (1.3) in the test mass limit η → 0. The very significant qualitative and quantitative advantages of the new sequence of approximants, Eq. (1.6), over the standard one, Eq. (1.5), when η → 0, make it plausible that the new sequence (h P n ) will also fare much better in the finite mass case 0 ≠ η ≤ 1/4. This question, which we can call the problem of the robustness of our results under the deformations brought by a finite value of η in the coefficients E k (η), F k (η) in Eqs. (1.2), (1.3), is more difficult to investigate, especially because one does not know, in this case, the "exact" results for E(v; η) and F (v; η).
We could, however, check the robustness of our construction in two different ways: (i) by studying the "Cauchy criterion" for the convergence of the (short) sequence (h P 0 (η), h P 2 (η), h P 4 (η), h P 5 (η)) versus that of the corresponding Taylor sequence, and (ii) by introducing a one-parameter family of fiducial "exact" functions e X κ0 (v), f X κ0 (v) to model the unknown higher-order (n ≥ 6) η-dependent contributions to the post-Newtonian expansions (1.2), (1.3), and by studying, for a range of values of the parameter κ 0 , the convergence of the short sequence (h P 0 (η), . . . , h P 5 (η)) toward the fiducial "exact" wave form h X κ0 (η). Though we believe the work presented below establishes the superiority of the new approximants h P n over the standard ones h T n and shows the practical sufficiency of the presently known v 5 -accurate post-Newtonian results, we still think that it is an important (and challenging) task to improve the (finite mass) post-Newtonian results. Of particular importance would be the computation [25] of the v 6 -accurate (equations of motion and) energy function, which would help in confirming and improving our estimate below of the location of the last stable orbit for η ≠ 0. Our calculations also suggest that knowing E and F to v 6 would further improve the effectualness (maximized overlap larger than 98%) and, more importantly, the faithfulness (diagonal overlap larger than 99.5%) to a level allowing a loss in the number of detectable events smaller than 1%, with significantly smaller biases (smaller than 0.5%) in the parameter estimates than the present O(v 5 ) results (about 1-5%).
The rest of this paper is organized as follows: In Sec. II we briefly discuss the phasing of restricted post-Newtonian gravitational wave forms, wherein corrections are only included in the phase of the wave form and not in the amplitude, indicating the way in which energy and flux functions enter the phasing formula. Various forms of energy and flux functions are introduced in Secs. III and IV, respectively, and their performance is compared. The ambiguity function, which is the overlap integral of two wave forms as a function of their parameters, is discussed in Sec. V, and some details of its computation by a numerical fast Fourier transform are given. In Sec. VI we present the results of our computations in the test mass case, while in Sec. VII we investigate the robustness of these test mass results as completely as possible. Sec. VIII contains our summary and concluding remarks. The paper concludes with two appendices. In Appendix A we discuss the Padé approximants, their relevant useful properties, and list some useful formulas used in the computations. In Appendix B we discuss carefully the issue of optimizing over the phases and provide a clear geometrical picture to implement the procedure.
II. THE PHASING FORMULA
To get an accurate expression for the evolving wave form h ij (t) emitted by an inspiralling compact binary one needs, in principle, to solve two interconnected problems: (i) one must work out (taking into account propagation and nonlinear effects) the way the material source generates a gravitational wave, and (ii) one must simultaneously work out the evolution of the source (taking into account radiation-reaction effects). The first problem, which in a sense deals mainly with the (tensorial) amplitude of the gravitational signal is presently solved to order v 5 [6][7][8][9][10]. Such an approximation on the instantaneous amplitude h ij seems quite sufficient in view of the expected sensitivity of the LIGO/VIRGO network. On the other hand, the second problem, which determines the evolution of the phase of the gravitational signal, is crucial for a successful detection. For simplicity, we shall work here within the "restricted wave form" approximation [26], i.e. we shall focus on the main Fourier component of the signal, schematically h(t) = a GW (t) cos φ GW (t), where the gravitational wave phase φ GW is essentially, in the case of a circular binary, twice the orbital phase Φ : φ GW (t) = 2Φ(t).
We find it conceptually useful to note the analogy between the radio-wave observation of binary pulsars and the gravitational-wave observation of a compact binary. High-precision observations of binary pulsars make crucial use of an accurate "timing formula" [27] linking the rotational phase of the spinning pulsar (stroboscopically observed when φ PSR n = 2πn with n ∈ N ) to the time of arrival t n on Earth of an electromagnetic pulse, and to some parameters p i . Similarly, precise observations of an inspiralling compact binary will need an accurate "phasing formula", i.e. an accurate mathematical model of the continuous evolution of the gravitational wave phase involving a set of parameters {p i } carrying information about the emitting binary system (such as the two masses m 1 and m 2 ). Heuristically relying on a standard energy-balance argument, the time evolution of the orbital phase Φ is determined by two functions: an energy function E(v), and a flux function F (v). Here the argument v is defined by Eq. (1.1), which can be rewritten in terms of the instantaneous orbital angular frequency Ω as v = (m Ω) 1/3 , (2.3) where, as above, m ≡ m 1 + m 2 denotes the total mass of the binary. The (dimensionless) energy function E is defined by E ≡ (E tot − m)/m , (2.4) where E tot denotes the total relativistic energy (Bondi mass) of the binary system. The flux function F (v) denotes the gravitational luminosity of the system (at the retarded instant where its angular velocity Ω is given by Eq. (2.3)).
Note that the three quantities v, E and F are invariantly defined (as global quantities in the instantaneous center of mass frame), so that the two functions E(v), F (v) are coordinate-independent constructs. Denoting as above the symmetric mass ratio by η ≡ m 1 m 2 /(m 1 + m 2 ) 2 , the energy balance equation dE tot /dt = −F gives the following parametric representation of the phasing formula Eq. (2.2) (written here for the orbital phase): t(v) = t c + m ∫ v v lso dv E ′ (v)/F (v) , (2.5) Φ(v) = Φ c + ∫ v v lso dv v 3 E ′ (v)/F (v) , (2.6) where t c and Φ c are integration constants, and where for readability we have not introduced a new name (such as v ′ ) for the dummy integration variable. Note that E ′ (v) < 0, F (v) > 0 so that both t and Φ increase with v. For definiteness, we have written the integrals in Eqs. (2.5), (2.6) in terms of a specific reference velocity, chosen here to be the velocity corresponding to the last stable circular orbit of the binary. Note that the choice of such a reference point is, in fact, entirely arbitrary and a matter of convention as one introduces the two integration constants t c and Φ c (which will be optimized later). The choice v ref = v lso is technically and physically natural as it is the value where the integrand vanishes (because of E ′ (v)). The definition (and properties) of our approximants do not depend on this choice and the reader is free to use instead his/her favorite reference point. On the other hand, what is not a matter of convention is that, in absence of information about the coalescence process, we shall also use v lso to define the time when the inspiral wave form shuts off. The numerical value of v lso in the case of a test mass orbiting a black hole (i.e. the limiting case η → 0) is 1/ √ 6. In the case of binaries of comparable masses (η ≠ 0), v lso is the value of v where E ′ (v) vanishes. We will discuss below ways of estimating v lso (η).
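The parametric phasing integrals, Eqs. (2.5), (2.6), are straightforward to evaluate numerically once E ′ (v) and F (v) are given. The sketch below is our own illustration, not the paper's code: it uses the Newtonian E ′ and F as stand-ins (any approximants could be substituted) and checks the quadrature against the standard closed-form Newtonian chirp result.

```python
import numpy as np

def trapz(y, x):
    # Explicit trapezoidal rule, kept self-contained on purpose.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

# Newtonian stand-ins for the two building blocks of the phasing formula.
def Eprime(v, eta):       # E'(v) = -eta v        (Newtonian energy)
    return -eta * v

def flux(v, eta):         # F(v) = (32/5) eta^2 v^10  (quadrupole flux)
    return 32.0 / 5.0 * eta**2 * v**10

def phasing(v, eta, m, v_ref, n=200_001):
    """Eqs. (2.5)-(2.6): integrate from v up to the reference velocity.
    Returns (t - t_c, Phi - Phi_c)."""
    grid = np.linspace(v, v_ref, n)
    dEdF = Eprime(grid, eta) / flux(grid, eta)
    return m * trapz(dEdF, grid), trapz(grid**3 * dEdF, grid)

eta, m = 0.25, 1.0
v_lso = 1.0 / np.sqrt(6.0)            # test-mass LSO velocity
t_of_v, phi_of_v = phasing(0.1, eta, m, v_lso)

# Closed-form Newtonian check (reference point at v_lso):
t_exact = -(5.0 * m / (256.0 * eta)) * (0.1**-8 - v_lso**-8)
phi_exact = -(1.0 / (32.0 * eta)) * (0.1**-5 - v_lso**-5)
print(t_of_v, t_exact)
print(phi_of_v, phi_exact)
```

Both t − t_c and Φ − Φ_c come out negative, as they must: the integrand E ′ /F is negative, so t and Φ increase monotonically with v up to the reference point.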
Knowledge of v lso (considered now as a physical quantity affecting the signal and not as a simple reference point) is important in gleaning astrophysical information, since the inspiral wave form would shut off at that point and the coalescence wave form, whose shape depends on the equation of state of the stars, etc., would begin. One of the questions we address below is whether (as had been suggested [13]) knowledge of v lso (η) is crucial for getting accurate inspiralling wave templates.
To warm up, let us recall that in the "Newtonian" approximation (i.e. when using the quadrupole formula for the gravitational wave emission) one has E(v) = −(1/2) η v 2 , F (v) = (32/5) η 2 v 10 , (2.7) so that the above formulas reduce (after redefining the constants of integration, or, equivalently, formally setting v lso = ∞) to t(v) = t c − (5/256)(m/η) v −8 , Φ(v) = Φ c − (1/(32η)) v −5 . (2.8) The explicit Newtonian phasing formula is obtained by eliminating v and is given by Φ(t) = Φ c − [(t c − t)/(5τ )] 5/8 , where τ ≡ η 3/5 m ("chirp time scale"). (2.9) The corresponding Newtonian gravitational wave amplitude is (for some constant C) a GW (t) = C v 2 (t) , (2.10) so that the explicit Newtonian templates read h(t) = C v 2 (t) cos[2Φ(t)] , with v(t) = [256 η (t c − t)/(5m)] −1/8 . (2.11) The crucial issue for working beyond the Newtonian approximation is the availability of sufficiently accurate representations for the two functions E ′ (v) and F (v). In the astrophysically interesting case of two comparable masses orbiting around each other neither of the functions E(v) or F (v) is known exactly and thus one must rely on a post-Newtonian expansion for both these quantities. The question is how accurate should our knowledge of the 'energy function' E(v) and the 'flux function' F (v) be so that we have only an acceptable reduction in the event rate and a tolerable bias in the estimation of parameters. Given some approximants of the energy and flux functions (as functions of v), say E A (v), F A (v), and given some fiducial velocity [28] v A lso , we shall define a corresponding approximate template by the following parametric representation in terms of v: h A (v) = C v 2 cos φ A GW (v) , (2.12) φ A GW (v) = 2 Φ A (v) , (2.13) t A (v) = t c + m ∫ v v A lso dv E ′ A (v)/F A (v) , (2.14) Φ A (v) = Φ c + ∫ v v A lso dv v 3 E ′ A (v)/F A (v) . (2.15)
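As a quick consistency check of the Newtonian phasing, one can verify numerically that eliminating v between t(v) and Φ(v) in Eq. (2.8) reproduces the explicit chirp phase with τ ≡ η 3/5 m. A short sketch of ours, with arbitrary illustrative parameter values:

```python
import numpy as np

eta, m = 0.2, 1.0
tau = eta**0.6 * m            # chirp time scale tau = eta^(3/5) m
t_c, Phi_c = 0.0, 0.0

# Parametric Newtonian phasing, Eq. (2.8):
v = np.linspace(0.05, 0.3, 50)
t = t_c - (5.0 * m / (256.0 * eta)) * v**-8
Phi = Phi_c - (1.0 / (32.0 * eta)) * v**-5

# Explicit chirp phase obtained by eliminating v, Eq. (2.9):
Phi_closed = Phi_c - ((t_c - t) / (5.0 * tau))**0.625
print(np.max(np.abs(Phi - Phi_closed)))   # zero up to roundoff
```

The identity holds exactly: (t_c − t)/(5τ ) = v −8 /(256 η 8/5 ), and raising this to the power 5/8 gives back v −5 /(32η) since 256 5/8 = 32.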
To compute explicitly h A (t) we numerically invert Eq. (2.14) to get v = V A (t) and substitute the result in the other equations: h A (t) = C V A 2 (t) cos[φ A GW (V A (t))]. Note that we use the Newtonian approximation for the amplitude as a function of v. We could use a more refined approximation, such as an effective (main Fourier mode) scalar amplitude. However, our main purpose here being to study the influence of the choice of better approximants to the phase evolution on the quality of the overlaps, it is conceptually cleaner to stick to one common approximation for the amplitude (considered as a function of our principal independent variable, v).
The standard approximants for E(v) and F (v) are simply their successive Taylor approximants, Eqs. (1.2), (1.3). Our strategy for constructing new approximants to E(v) and F (v) is going to be two-pronged. On the one hand, using the knowledge of these functions in the test-mass limit and general theoretical information about their mathematical structure, we shall motivate the use of representations of E(v) and F (v) based on other, supposedly more basic energy-type and flux-type functions, say e(v) and f (v). On the other hand, we shall construct Padé-type approximants, say e Pn , f Pn , for the "basic" functions e(v), f (v), instead of straightforward Taylor approximants. We shall then compare the performance of the various phasing formulas defined by inserting these successive approximants into the parametric representation of the template given above.
III. ENERGY FUNCTION
Let us motivate the introduction of a new energy function e(v) as a more basic object, hopefully better behaved than the total relativistic mass-energy E tot , Eq. (2.4), of the binary system. For this, let us consider the limit m 2 /m 1 → 0. In this test body limit, i.e. a test particle m 2 moving in the background of a Schwarzschild black hole of mass m 1 , the total conserved mass-energy of the binary system reads E tot = m 1 − k µ p µ 2 , (3.1) where k µ is the time-translation Killing vector, and p µ 2 the 4-momentum of the test mass. [The quantity E 2 ≡ −k µ p µ 2 is the well-known conserved relativistic energy of a test particle moving in a stationary background.] At infinity k µ = p µ 1 /m 1 , so that the formal expression of E tot is E tot = m 1 − (p 1 · p 2 )/m 1 . This expression is clearly very asymmetric in the labels 1 and 2 and has bad analytical properties as a function of m 1 . Both problems are cured by working instead with the standard Mandelstam variable s = E 2 tot = −(p 1 + p 2 ) 2 = m 2 1 + m 2 2 − 2(p 1 · p 2 ). Further, it is known that, in quantum two-body problems, the symmetric quantity ǫ ≡ [s − m 2 1 − m 2 2 ]/(2 m 1 m 2 ) (3.2) is the best energy function to consider when trying to extend one-body-in-external-field results to two-body results [29]. In the limit m 2 ≪ m 1 the quantity ǫ reduces simply to ǫ = −(p 1 · p 2 )/m 1 m 2 = E 2 /m 2 + O(η).
In the case of a test mass in circular orbit around a Schwarzschild black hole the explicit expression of the quantity ǫ in terms of the invariant argument x ≡ v 2 = (m Ω) 2/3 reads ǫ(x) = (1 − 2x)/ √ (1 − 3x) . (3.3) The explicit test-mass result (3.3) suggests that the (unknown) exact two-body function ǫ(x) will also have some ∼ (x − x 0 ) −1/2 singularity in the complex x-plane. This leads us finally to consider, instead of the function ǫ, its square or, equivalently, the new energy function e(x) ≡ ǫ 2 (x) − 1 . (3.4) Note that we assume here that the total instantaneous relativistic energy of a binary system (in the center of mass frame) can be defined as a time-symmetric functional of positions and velocities (so that E(v) depends on v only through x ≡ v 2 ), as the quantity Ẽ even discussed in Sec. VII of Ref. [30]. It remains, however, unclear whether such a quantity is well defined at very high post-Newtonian orders and whether it is then related to the gravitational wave flux by the standard balance equation. Summarizing, our proposal is to use as basic (symmetric) energy function the quantity e(x), Eq. (3.4).
Given any (approximate or fiducially "exact") function e(x), we shall then define the corresponding function E(x) (with x ≡ v 2 ) entering the phasing formulas of Sec. II by solving Eq. (3.4) in terms of E tot ≡ (m 1 + m 2 )(1 + E). Explicitly, this gives E(x) = [1 + 2η( √ (1 + e(x)) − 1)] 1/2 − 1 . (3.5) The associated v-derivative entering the phasing formula reads E ′ (v) = η v e ′ (x) / [ √ (1 + e(x)) (1 + 2η( √ (1 + e(x)) − 1)) 1/2 ] . (3.6) Having defined our new, basic energy function e(x), it remains to define the approximants of e(x) that we propose to use, when one knows only the Taylor expansion of E(x). For guidance, let us note that by inserting Eq. (3.3) into Eq. (3.4) one gets the following exact expression for the test-mass limit of the function e(x): e(x; η = 0) = −x (1 − 4x)/(1 − 3x) . (3.7) The generalization of the expansion Eq. (3.7) to non-zero values of η is only known to second post-Newtonian (2PN) accuracy. Using Eq. (4.25) of Ref. [7], that is E T4 (x; η) = −(η x/2) [1 − (9 + η) x/12 − (27 − 19η + η 2 /3) x 2 /8] , (3.8) we compute the 2PN expansion of the function e(x) for a finite η: e T4 (x; η) = −x [1 − (1 + η/3) x − 3 (1 − 35η/36) x 2 ] . (3.9) The basic idea behind our proposal is that on the grounds of mathematical continuity [31] between the case η → 0 and the case of finite η one can plausibly expect the exact function e(x) to be meromorphically extendable in at least part of the complex plane and to admit a simple pole singularity on the real axis ∝ (x − x pole ) −1 as nearest singularity in the complex x-plane. We do not know the location of this singularity when η ≠ 0, but Padé approximants are excellent tools for giving accurate representations of functions having such pole singularities. For example, if we knew only the 2PN-accurate (i.e. O(v 4 )) expansion of the test-mass energy function, namely e 2PN (x; η = 0) = −x(1 − x − 3x 2 ), its P 1 1 Padé approximant would give e P4 (x; η = 0) = −x (1 − 4x)/(1 − 3x) , (3.10) which coincides with the exact result, Eq. (3.7). Having reconstructed the exact function e(x), we have also reconstructed, using only the information contained in the 2PN-accurate expansion, the existence and location of a last stable orbit. Indeed, using Eqs. (3.6) and (3.10) we find E ′ P4 (v; η → 0) = −η v (1 − 6v 2 ) / (1 − 3v 2 ) 3/2 , (3.11) which is the exact test mass expression exhibiting a last stable orbit at v lso = 1/ √ 6.
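This reconstruction is easy to verify explicitly. The sketch below (our own illustration, in exact rational arithmetic) builds the P 1 1 continued-fraction Padé from the 2PN test-mass Taylor coefficients and recovers both the exact e(x) = −x(1 − 4x)/(1 − 3x) and the last stable orbit at x = 1/6:

```python
import math
from fractions import Fraction as F

# 2PN Taylor coefficients of -e(x)/x in the test-mass limit:
# e_T4(x) = -x (1 - x - 3 x^2).
a0, a1, a2 = F(1), F(-1), F(-3)

# Continued-fraction Pade P^1_1 of a0 + a1 x + a2 x^2:
# c0 / (1 + c1 x / (1 + c2 x)) = c0 (1 + c2 x) / (1 + (c1 + c2) x).
c0 = a0
c1 = -a1 / a0
c2 = -a2 / a1 + a1 / a0
num_lin, den_lin = c2, c1 + c2    # e_P4(x) = -x (1 + c2 x)/(1 + (c1+c2) x)

# The Pade reconstructs the EXACT test-mass energy function
# e(x) = -x (1 - 4x)/(1 - 3x), with its light-ring pole at x = 1/3.
assert (num_lin, den_lin) == (F(-4), F(-3))

# Last stable orbit: minimum of e_P4(x).  Setting de/dx = 0 gives
# 1 + 2 c2 x + c2 (c1 + c2) x^2 = 0; the relevant (smaller) root:
A, B, C = c2 * (c1 + c2), 2 * c2, F(1)
x_lso = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)
print(x_lso)   # 1/6, i.e. v_lso = 1/sqrt(6)
```

Exact rational arithmetic makes the cancellation manifest: no floating-point coincidence is involved in recovering the pole at x = 1/3 and the minimum at x = 1/6.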
In Table I we have compared, at different post-Newtonian orders, the x lso ≡ v 2 lso predicted by the standard post-Newtonian series and by its Padé approximation.
It is important to note that our assumption of structural stability between e(x; η = 0) and e(x; η) with 0 < η ≤ 1/4 is internally consistent in the sense that the coefficients of x and x 2 in the square brackets of Eq. (3.9) fractionally change, when η is turned on, only by rather small amounts: η/3 ≤ 1/12 ≃ +8.3% and |35η/36| ≤ 35/144 ≃ 24.3%, respectively. This contrasts with other attempts to consider η as a perturbation parameter, such as Ref. [32]. Indeed, in the quantities considered in the latter work several of the 2PN terms have coefficients that vary by very large fractional amounts as η is turned on: some examples being 12 + 29η, 2 + 25η + 2η 2 , 4 + 41η + 8η 2 in Eqs. (2.2) of the second reference in [32]. Moreover, the fact that many of the coefficients in their Eqs. (2.2) increase when η is turned on (like the ones quoted above) is not a good sign for the reliability of their approach, as it means, roughly, that the radius of convergence of the particular series they consider tends to decrease as η is turned on. We shall attempt below to further test the robustness of our proposal.
In summary, our proposal is the following: Given some usual Taylor approximant of the new energy function, e T2n (x) = −x (a 0 + a 1 x + · · · + a n x n ) , (3.12) in which the only known coefficients are a 0 , a 1 (η), . . . , a n (η), one defines the improved approximant corresponding to Eq. (3.12) by taking the diagonal (P m m , if n = 2m) or subdiagonal (P m m+1 , if n = 2m + 1) Padé approximant of −x −1 e T2n (x): e P2n (x) ≡ −x P m m+ǫ [−x −1 e T2n (x)] , (3.13) where ǫ = 0 or 1 depending on whether n ≡ 2m + ǫ is even or odd. For completeness, we recall the definition and basic properties of Padé approximants in Appendix A. Let us only mention here that the P m m+ǫ approximants are conveniently obtained as a continued fraction. For instance, the Padé approximant of the 2PN-approximate e T4 (x) = −x(a 0 + a 1 x + a 2 x 2 ) is: e P4 (x) = −x c 0 / (1 + c 1 x/(1 + c 2 x)) . (3.14) By demanding that this agrees with e T4 to order v 4 we can relate the c n 's in the above equation to the a n 's in Eq. (3.12): c 0 = a 0 , c 1 = −a 1 /a 0 , c 2 = −a 2 /a 1 + a 1 /a 0 . (3.15) Explicitly, for the coefficients of Eq. (3.9), this gives c 0 = 1 , c 1 = 1 + η/3 , (3.16) c 2 = −(3 − 35η/12)/(1 + η/3) − (1 + η/3) , (3.17) so that e P4 (x) = −x (1 + c 2 x) / (1 + (c 1 + c 2 ) x) . (3.18) Given a continued fraction approximant e Pn (x) of the truncated Taylor series e Tn of the energy function e(x), the corresponding E(x) and E ′ (x) functions are obtained using: E Pn (x) = [1 + 2η( √ (1 + e Pn (x)) − 1)] 1/2 − 1 , (3.19) E ′ Pn (v) = η v e ′ Pn (x) / [ √ (1 + e Pn (x)) (1 + 2η( √ (1 + e Pn (x)) − 1)) 1/2 ] . (3.20) Thus, for instance, Ê ′ Pn (v) ≡ E ′ Pn (v)/(−η v) , (3.21) where E Pn is given by Eq. (3.19). The hatted notation introduced in the left-hand side of Eq. (3.21) will again be used below and indicates that one is dividing some function of v by its Newtonian approximation: e.g. F̂ (v) ≡ F (v)/F N (v) with F N (v) = (32/5) η 2 v 10 .
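In the finite-mass case the same continued-fraction construction fixes the location of the light-ring pole of e P4 . A minimal rational-arithmetic sketch of ours, with the c-coefficient relations written out explicitly in the comments (the value 3 x pole (1/4) ≃ 1.4312 is the one quoted later in the text):

```python
from fractions import Fraction as F

def e_pade_2pn_coeffs(eta):
    """Continued-fraction coefficients for the 2PN energy function
    e_T4(x; eta) = -x (a0 + a1 x + a2 x^2), using
    c0 = a0, c1 = -a1/a0, c2 = -a2/a1 + a1/a0."""
    a0 = F(1)
    a1 = -(1 + eta / 3)
    a2 = -3 * (1 - 35 * eta / 36)
    return a0, -a1 / a0, -a2 / a1 + a1 / a0

eta = F(1, 4)                      # equal-mass case
c0, c1, c2 = e_pade_2pn_coeffs(eta)

# e_P4(x) = -x (1 + c2 x)/(1 + (c1 + c2) x): simple pole ("light ring") at
x_pole = -1 / (c1 + c2)
print(x_pole, 3 * float(x_pole))   # 52/109 ; 3 x_pole = 1.4312...
```

At η = 0 the same formulas give x pole = 1/3, the Schwarzschild light ring, so the finite-η value is a smooth deformation of the test-mass one.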
Having argued that e P4 (x), Eq. (3.18), and the corresponding E P4 (x) defined by Eq. (3.19), are better estimates of the finite-mass energy functions than their straightforward post-Newtonian approximations, Eqs. (3.8), (3.9), we can use our results so far to estimate both the location of the last unstable circular orbit (light ring) and that of the last stable circular orbit. The functions ê P4 (v), Ê P4 (v) are plotted in Fig. 2 together with ê T4 (v) and Ê T4 (v), both sets for η = 1/4, and compared with the exact functions ê(v) and Ê ′ (v) in the η = 0 (i.e. test mass) case. We see that the η = 1/4 P - and T -approximants are smooth deformations of their test-mass limits. Note that the variable x ≡ v 2 is, in the limit η → 0, equal to m/r in Schwarzschild coordinates and can be used as a smooth radial coordinate. If we wished we could also introduce the function J tot (x) giving the x-variation of the total angular momentum. It is indeed related to the total energy E tot (x) by the general identity (for circular orbits) d E tot = Ω d J tot where the circular frequency is given by m Ω = v 3 = x 3/2 . The consideration (even without knowing its precise analytical form) of the effective potential for general (non circular) orbits E tot = E tot (r, J tot ), in terms of any smooth radial-type variable r measuring the distance between the two bodies, allows one to see (by smooth deformation from the η = 0 case) that the minimum of E tot (x) (which necessarily coincides with the minimum of J tot (x)) defines the last stable circular orbit. Indeed, it is the confluence of the one-parameter sequence of minima of E tot (r, J tot ) considered as a function of r for fixed J tot (stable circular orbits) with the one-parameter sequence of maxima of E tot (r, J tot ) (unstable circular orbits). Note also, from Eq. (3.20), that the last stable orbit (minimum of E(x)) necessarily coincides with the minimum of the function e(x).
As for the last unstable circular orbit, it is clearly defined by the square-root singularity ∝ (x − x pole ) −1/2 of E(x), corresponding to a simple pole (x − x pole ) −1 in e(x). Applying these general considerations to our specific 2PN-Padé proposal (3.18), one easily finds that we predict the following "locations" (in the invariant x variable) for both the light ring (corresponding to r = 3m for a test mass around a Schwarzschild black hole), x P4 pole (η) = −1/(c 1 + c 2 ) = (1 + η/3) / (3 − 35η/12) , (3.22) and for the last (circular) stable orbit, given by the relevant root of 1 + 2 c 2 x + c 2 (c 1 + c 2 ) x 2 = 0 , (3.23) which reduces to x lso = 1/6 when η → 0. We recall that x is invariantly defined in terms of the orbital circular frequency Ω = 2π f orb through x = (m Ω) 2/3 , so that the gravitational wave frequency (twice the orbital frequency) reads f GW = x 3/2 /(π m) . (3.24) In the equal mass case (η = 1/4), Eqs. (3.22), (3.23) give x P4 pole (1/4) = 52/109 ≃ 0.4771 and x P4 lso (1/4) ≃ 0.1986, i.e. values larger than their test-mass counterparts 1/3 and 1/6, whereas the "hybrid" approach of Ref. [32] predicts a ratio x lso (η)/x lso (0) < 1, decreasing with η. This is an important physical difference as it means, if we are right, that binary systems of comparable masses can get closer, orbit faster and emit more gravitational waves before plunging in than estimated in Ref. [32]. As said above, we think that the "hybrid" approximation used in Ref. [32] is not reliable, notably because of the strong η-dependence (and consequent increase) of the coefficients in their expansion (see also the related criticism of Ref. [34]). We think that our approach (in which the expansion coefficients of e(x) are less strongly modified by η and where the crucial coefficient a 2 decreases with η, which means a larger radius of convergence) is more likely to indicate the correct trend. We have tried in several ways to test the robustness of our conclusions under the addition of higher post-Newtonian corrections to Eq. (3.9). We think, however, that such attempts are not really conclusive because one does not know in advance what is the "plausible" range of values of 3PN and higher η-dependent corrections. [We note in this respect that the range considered in Ref.
[32], |α i | max = |β i | max = 10, is clearly too small as it means, for instance, a fractional change in the coefficient of (m/r) 3 when η changes from 0 to 1/4 of η|α 3 |/16 < 16%, while the known fractional change in the coefficient of (m/r) 2 is already η 29/12 > 60%.] In fact, the relative change (ratio a k (1/4)/a k (0) when η changes from 0 to 1/4) of the successive coefficients in any power series, such as the a k (η) in Eq. (3.12), is expected to increase (or decrease) exponentially with the order k due to an η-dependent shift of the convergence radius. For instance, in our case, if we write the 3PN coefficient as a 3 (η) = −9(1 + κ 3 η) to model the 3PN η-dependence, it is not meaningful to consider a priori that κ 3 can take any values in the range ±|κ 2 | ≃ ±1 (where we introduced a 2 (η) = −3(1 + κ 2 η) with κ 2 = −35/36). As the negative value of κ 2 has indicated an increase of the radius of convergence with η (x P4 pole (η) increasing with η), one should rather expect κ 3 to be comparably negative, which leads to the estimate a 3 (1/4) ≃ −4.8. A value of κ 3 very different from this estimate (i.e. a value of a 3 (1/4) very different from −4.8) would mean that the coefficient a 2 (1/4) was accidentally smaller than normal (in which case our estimates (3.22), (3.23) would not be reliable). In conclusion, we think that, given the presently available information, our estimates are more internally consistent than previous ones (which include the relevant works quoted in Ref. [32]), but that, if a 2 (η) is only "accidentally" decreased by turning on η, they might be off the mark. It will be possible to make more precise statements on the reliability of Eq. (3.23) only when the 3PN equations of motion of a binary system are derived (or when numerical calculations can reliably locate the last stable orbit). Anyway, we shall see that a knowledge of the LSO is not so crucial for extracting the inspiral wave form. [We shall notwithstanding test below the robustness of our overall approach under possible uncertainties in the locations of x pole (η) and x lso (η).]
This is because: (a) interferometer noise rises quadratically beyond a certain frequency; consequently, the noise level is quite high before light binaries, such as NS-NS and NS-BH, reach the LSO; only in the case of more massive binaries consisting of black holes and/or supermassive stars, with total mass in excess of 25 M ⊙ in the case of initial LIGO and 60 M ⊙ in the case of advanced LIGO, will the frequency at the LSO be in a region where the detector noise is low. In such cases it is important to know the location of the LSO accurately because it helps in appropriately truncating the inspiral wave form in search templates, so that it does not produce anticorrelation with the coalescence wave form, which is itself not known, as of now, to any accuracy. (b) In the case of lighter mass binaries what is really needed is that the approximate energy function should match the exact one at the frequencies where the detector noise is the least. This is also true for the flux function, as we shall see in the next Section.
IV. FLUX FUNCTION
Contrary to the case of the energy function, where we could draw on a lot of theoretical information, we have less general a priori information on the structure of the flux function F (v). The exact gravitational wave luminosity F is not known analytically. It has, however, been computed numerically with good accuracy in the test particle limit [13] and we shall use this in our study. In the test particle limit the flux is also known analytically to a high order in perturbation theory; to order v 11 [11] we have F T11 (v) = (32/5) η 2 v 10 Σ k=0 11 (A k + B k ln v) v k , (4.1) where the various coefficients A k and B k can be read off from [11]. By contrast, in the comparable masses case only the first five Taylor approximants of F (v; η) are known [6][7][8][9][10]. Explicitly, B k (η) = 0 (k ≤ 5) and A 0 = 1 , A 1 = 0 , A 2 = −1247/336 − 35η/12 , A 3 = 4π , A 4 = −44711/9072 + 9271η/504 + 65η 2 /18 , A 5 = −(8191/672 + 583η/24) π . (4.2) There is, however, a bit of general information about the function F (v) which can be used to motivate the consideration of a transformed flux function, say f (v), as a better behaved object. Indeed, as pointed out in Ref. [2], the function F (v; η = 0) has a simple pole at the light ring (r = 3m, i.e. x ≡ v 2 = 1/3). The origin of this pole is simple to understand physically in a flat spacetime analog. [It is seen from Refs.
[1] and [2] that the curved-spacetime effects (metric coefficients, Green function) do not play an essential role and that the origin of the pole can be directly seen in the source terms, Eqs. (2.14) of Ref. [1].] Let us consider two (for simplicity identical) mass points, linked by a relativistic (Nambu-Goto) string, orbiting around each other on a circle (the string tension T providing the centripetal force opposing centrifugal effects). One can easily find the exact solution of this problem and then estimate the linearized gravitational waves emitted by the system [35]. Let us keep fixed the rest masses m 1 = m 2 = m/2 and the radius of the orbit R, and increase the tension T so that the particles' velocities v tend to the velocity of light. In this limit, one finds that RT ∼ p = mv/ √ (1 − v 2 /c 2 ) and that the gravitational wave amplitude h ∝ RT + p ∼ p. By taking a time derivative and squaring one sees that, as v → c, the gravitational flux F ∼ Ω 2 h 2 ∝ p 2 tends to infinity like (1 − v 2 /c 2 ) −1 . This shows that the finding of Refs. [1,2] is quite general and that, in particular, it is very plausible that a binary system of comparable masses will have a simple pole in F (v) when the bodies tend to the light ring orbit. We have seen above that the light ring orbit corresponds to a simple pole x pole (η) in the new energy function e(x; η). Let us define the corresponding (invariant) "velocity" v pole (η) ≡ √ x pole (η). This motivates the introduction of the following "factored" flux function f (v; η) ≡ (1 − v/v pole (η)) F (v; η) . (4.4) Note that multiplying by 1 − v/v pole rather than 1 − (v/v pole ) 2 has the advantage of regularizing the structure of the Taylor series of f (v) by introducing a term linear in v (which is absent in Eq. (4.1)). Two further tricks will allow us to construct well converging approximants to f (v). First, it is clear (if we think of v as having the dimension of a velocity) that one should normalize the velocity v entering the logarithms in Eq.
(4.1) to some relevant velocity scale v 0 . In the absence of further information the choice v 0 = v lso (η) seems justified (the other basic choice, v 0 = v pole , is numerically less desirable as v will never exceed v lso and we wish to minimize the effect of the logarithmic terms). A second idea, to reduce the problem to a series amenable to Padéing, is to factorize the logarithms by writing the f function in the form f (v) = [1 + ln(v/v lso ) (ℓ 6 v 6 + ℓ 7 v 7 + · · ·)] × [f 0 + f 1 v + f 2 v 2 + · · · + f 11 v 11 + · · ·] . (4.5) The ellipses in Eq. (4.5) are meant to represent possible higher powers of ln(v/v lso ). [Such terms do not show up at order v 11 when η = 0 and will also be of no concern when considering the η ≠ 0 results at order v 6 .] The coefficients f k are functions of v lso in general [36].
Finally, we define our approximants to the factored flux function f (v) as f Pn (v) ≡ [1 + ln(v/v Pn lso ) Σ k≥6 ℓ k v k ] P m m+ǫ [f 0 + f 1 v + · · · + f n v n ] , (4.6) where v Pn lso (η) denotes the LSO velocity (≡ √ x lso ) for the v n -accurate Padé approximant of e(x), and where P m m+ǫ denotes as before a diagonal or subdiagonal Padé with n ≡ 2m + ǫ, ǫ = 0 or 1. The corresponding approximant of the flux F (v) is then defined as F Pn (v) ≡ f Pn (v) / (1 − v/v Pn pole (η)) , (4.7) where v Pn pole (η) denotes the pole velocity defined by the v n -Padé of e(x). We find (remarkably?) that in the test particle limit the O(v 9 ) logarithmic term vanishes identically: ℓ 9 (η = 0) ≡ 0. The other coefficients are numerically (η = 0): . . .063926928123, f 9 = 1188.0521512280, f 10 = −2884.9014287843, and f 11 = 2823.3603070298. As for the log-factor in Eq. (4.6), we find that when it is not identically 1 (i.e. when n ≥ 6) it is always smaller than about 1.005 for v ≤ v lso ≃ 0.40825 and much closer to 1 when v < ∼ 0.2. Although it is unpleasant to have logarithms mixing with powers, they do not seem to introduce, in the present case (after normalization to v lso and factorization), a serious obstacle to constructing good approximants to f (v).
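The two-stage construction F Pn = f Pn /(1 − v/v pole ) can be sketched numerically at orders below v 6 , where no logarithms enter. This is our own illustration, not the paper's code: the generic Padé solver is standard linear algebra, and the 2.5PN test-mass flux coefficients are the published values from the literature cited in the text.

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade of the series sum_k c[k] v^k; returns (p, q), q[0] = 1."""
    c = np.asarray(c, dtype=float)
    M = np.zeros((n, n))
    rhs = np.zeros(n)
    for i, k in enumerate(range(m + 1, m + n + 1)):
        rhs[i] = -c[k]
        for j in range(1, n + 1):
            if k - j >= 0:
                M[i, j - 1] = c[k - j]
    q = np.concatenate(([1.0], np.linalg.solve(M, rhs)))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

def series_of_ratio(p, q, order):
    """Taylor coefficients of p(v)/q(v) up to the given order."""
    s = np.zeros(order + 1)
    for k in range(order + 1):
        pk = p[k] if k < len(p) else 0.0
        s[k] = (pk - sum(q[j] * s[k - j]
                         for j in range(1, min(k, len(q) - 1) + 1))) / q[0]
    return s

# Newton-normalized test-mass flux through v^5 (published 2.5PN values):
A = np.array([1.0, 0.0, -1247/336, 4*np.pi, -44711/9072, -8191*np.pi/672])
v_pole = 1 / np.sqrt(3)                   # test-mass light-ring velocity

# Factored flux f(v) = (1 - v/v_pole) F(v), truncated at v^5:
f = np.convolve(A, [1.0, -1.0 / v_pole])[:6]

# Sub-diagonal Pade P^2_3 of f, then F_P5(v) = f_P5(v)/(1 - v/v_pole):
p, q = pade(f, 2, 3)

# Defining property: the Pade'd factored flux re-expands to the original
# Taylor coefficients through v^5.
f_series = series_of_ratio(p, q, 5)
geom = (1.0 / v_pole) ** np.arange(6)     # series of 1/(1 - v/v_pole)
F_series = np.convolve(f_series, geom)[:6]
print(np.max(np.abs(F_series - A)))       # ~ 0
```

The factoring step is just a polynomial convolution with (1, −1/v pole ), and the final division restores the intended simple pole at v = v pole outside the region of integration.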
Our primary aim in this work is to compare and contrast the convergence properties of the standard ("Taylor") approximants to the phasing formula and its building blocks E(v) and F (v) with the new approximants defined above (with their two-stage construction E[e P ] and F [f P ]). Let us first discuss the case of the flux function, which can be studied in detail in the limiting case η → 0. Indeed, in this case one knows both the "exact" (numerical) flux function [13], say F X (v), and its post-Newtonian expansion up to order v 11 [11]. We can then compare directly the approach toward F X (v), on the one hand, of the successive standard Taylor approximants F Tn (v; η = 0) (obtained by keeping only the A k and B k with k ≤ n in Eq. (4.1)) and, on the other hand, of the new approximants F Pn (v; η = 0) defined by Eqs. (4.6), (4.7). This comparison of convergence is illustrated in Fig. 3. We have plotted there, for convenience, the "Newton-normalized" flux functions F̂ (v) ≡ F (v)/F N (v), with F N (v) ≡ (32/5) η 2 v 10 . It is clear that the P -approximants converge to the exact values much faster than the Taylor ones. The monotonicity of the convergence of the P -approximants is also striking. However, the P -approximants of the flux at certain orders (notably v 7 and v 10 ) exhibit poles that happen to lie in the region of integration: v low < v < v lso . Such P -approximants are obviously a bad choice for the construction of templates. Nevertheless, this does not mean that one cannot construct P -approximants at that order at all. Recall that in this study we have only considered diagonal and subdiagonal Padé approximants, of type P m m and P m m+1 respectively. It is perfectly legitimate to employ other types of Padés, and in particular the superdiagonal Padé of type P m+ǫ m . For instance, there is a pole in the region of interest in the P 3 4 -approximant of the flux, while it turns out that the P 4 3 -approximant (which is the one we have used in this work instead of P 3 4 ) does not have a pole in the region of interest.
Thus, if one wishes, one may trade a spurious zero of the denominator in the region of interest for a zero of the numerator, thereby removing the troublesome pole (see Appendix A for how this may be accomplished via some simple properties of the Padé approximants). For completeness we exhibit in Fig. 4 the successive P-approximants to the factored flux function f(v; η = 0).
The other building block of the phasing formula, Eqs. (3.10), is the set of approximants to the function E′(v) = dE(v)/dv. As we have constructed E_{Pn}(v) so that it coincides for n ≥ 4 with the exact E_X(v) in the case η = 0, it would not be fair to compare it to the straightforward E_{Tn}(v). We need, therefore, to consider the finite mass case η ≠ 0. However, in this case we only know a few PN approximations and we do not know the exact result. We can formally bypass this problem, and have a first test of the robustness of our construction, by defining the fiducial "exact" energy function e^{κ0}_X(x) of Eq. (4.11). The 2PN expansion of e^{κ0}_X(x) coincides by construction with that of the "real" e(x; η). The parameter κ_0 enters only 3PN and higher order terms. Note that κ_0 parametrizes an infinite number of PN terms in a non-perturbative manner, because it determines the location of the pole singularity of e^{κ0}_X, namely 3x_pole = (1 − κ_0 η)^{−1}. If we believe our 2PN Padé estimate (3.22), we would expect that a good estimate of the "real" κ_0 (when considering η = 1/4) should be such that 1 − κ^{P4}_0/4 = (1 − 35/144)/(1 + 1/12), i.e. κ^{P4}_0 = +47/39 ≃ +1.2051. To test formally the convergence of the sequence of P-approximants away from the region where we know by construction that it would converge very fast, we shall consider a value of κ_0 substantially different from the Padé-expected one, for instance simply κ_0 = 0, which says that the "exact" pole stays, even when η ≠ 0, at the test mass value 3x = 1 instead of our result above, 3x^{P4}_pole(1/4) ≃ 1.4312. Working again with "Newton-normalized" functions, we compare in Fig. 5 the convergence of E′_{Tn}(v) and E′_{Pn}(v) toward the fiducial "exact" e^{κ0}_X(v) defined by Eq. (4.11) for η = 1/4, κ_0 = 0. For completeness we exhibit also in Fig. 6 the successive P-approximants to the "basic" energy function e^{κ0}_X.
The convergence tests performed in this Section have shown at the visual level that the P -approximants behaved better than the T -ones. However, the real convergence criterion we are interested in is that defined by overlaps, to which we now turn.
V. AMBIGUITY FUNCTION
Central to our discussion is the ambiguity function, which is a measure of the overlap of two wave forms that may differ from each other not only in their parameter values but also in their shape. For instance, one of them could be a first post-Newtonian signal corresponding to non-spinning stars, parametrized by the masses of the two stars, and the other may be a second post-Newtonian inspiral wave form corresponding to spinning stars, parametrized not only by the masses of the two stars but also by their spins. Let us therefore consider two wave forms h(t; λ_k) and g(t; μ_k), where λ_k, k = 1, …, n_λ, and μ_k, k = 1, …, n_μ, are the parameters of the signals and n_λ and n_μ are the corresponding numbers of parameters. The scalar product ⟨h, g⟩ of these two wave forms is defined in Fourier space by Eq. (5.1), where τ is the lag of one of the wave forms relative to the other, h̃(f; λ_k) and g̃(f; μ_k) denote the Fourier transforms [37] of h(t; λ_k) and g(t; μ_k), respectively, * denotes complex conjugation, and S_n(f) is the two-sided noise power spectral density. The above scalar product is also the statistic of matched filtering (Wiener filter), which is the strategy used in detecting inspiralling binary signals. S_n(f) being a (positive) real, even function of f, the scalar product (5.1) defines a real bilinear form in h and g. We introduce also the norm ‖h‖ ≡ ⟨h, h⟩^{1/2}. The ambiguity function A is defined as the value of the normalized scalar product maximized over the lag parameter τ, Eq. (5.2), where optimization over the phases of the signal and the template is symbolically indicated by φ (see Appendix B for details). Here λ_k can be thought of as the parameters of a signal while μ_k are those of a template. The signal-to-noise ratio (SNR) for detecting a noise-contaminated version of h(t) with a Wiener filter built from g(t − τ) reads SNR = ⟨h, g⟩/‖g‖. Its maximum value is SNR_max = ⟨h, h⟩/‖h‖ = ‖h‖, attained when the time-translated g is perfectly matched to the signal: g(t − τ) = h(t).
Therefore A(λ_k, μ_k) is the reduction in SNR obtained by using a template that is not necessarily matched to the signal. The dependences of A(λ, μ) on both λ and μ are important in designing detection strategies. The dependence on the signal parameters λ, given some template parameters μ, allows one to define an optimal way of paving the template parameter space. The region in the signal parameter space for which a given template obtains SNRs larger than a certain value [38] (sometimes called the minimal match [39]) is the span of that template [40], and the templates should be so chosen that together they span the entire signal parameter space of interest with the least overlap of one another's spans. In our case, we are mainly interested in keeping the signal parameters λ fixed and varying the template ones μ. In searching for a coalescing binary signal in the output of a detector one maximizes over a given bank of templates (i.e. over a dense lattice of μ values). Thus, the quantity of interest is the maximum of the ambiguity function over the entire parameter space of templates. This maximum, in the case of identical signals, occurs when the parameters of the template and the signal are equal, and is equal to 1. However, in reality the template wave forms are not identical to the fully general relativistic signal, and hence the maximum overlap will in general be less than 1 (Schwarz inequality) and will occur not when the parameters are matched but when they are mismatched. If the template wave forms are not 'close' to the signal wave forms, then it is reasonable to expect that the maximum occurs when |λ_k − μ_k| is fractionally rather large. In this case there is not only a substantial reduction in the maximum SNR that can be achieved by using such a bank of templates, but there is also a large systematic bias in the measurement of parameters. Using the terminology of the Introduction, such template wave forms would be neither effectual nor faithful.
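The scalar product (5.1) and the maximization over the lag τ in the ambiguity function are easy to sketch numerically. The following toy illustration (our own, not the paper's pipeline) evaluates the noise-weighted product via FFTs and obtains the correlation at every lag from a single inverse FFT; the flat two-sided PSD and the windowed-chirp signals are assumptions made purely for illustration.

```python
import numpy as np

def inner(h, g, Sn, dt):
    # Noise-weighted scalar product <h, g> = Re sum h~(f) g~*(f)/Sn(f) df,
    # evaluated on the discrete two-sided FFT frequency grid.
    hf = np.fft.fft(h) * dt
    gf = np.fft.fft(g) * dt
    df = 1.0 / (len(h) * dt)
    return float(np.real(np.sum(hf * np.conj(gf) / Sn)) * df)

def overlap_max_lag(h, g, Sn, dt):
    # Normalized overlap maximized over the relative lag tau: one inverse
    # FFT of the weighted cross-spectrum gives the correlation at all lags.
    hf = np.fft.fft(h) * dt
    gf = np.fft.fft(g) * dt
    df = 1.0 / (len(h) * dt)
    corr = np.real(np.fft.ifft(hf * np.conj(gf) / Sn)) * df * len(h)
    return float(np.max(corr)) / np.sqrt(inner(h, h, Sn, dt) * inner(g, g, Sn, dt))

# toy "chirp" and a time-shifted copy of it; flat (white) two-sided PSD
dt = 1.0 / 1024
t = np.arange(0.0, 1.0, dt)
h = np.sin(2 * np.pi * (30 * t + 20 * t ** 2)) * np.exp(-((t - 0.5) / 0.2) ** 2)
g = np.roll(h, 37)            # same wave form at an unknown lag
Sn = np.ones(len(t))
print(round(overlap_max_lag(h, g, Sn, dt), 6))   # -> 1.0: the lag is recovered
```

Restricting the sum to a frequency band, as in the truncated integral (5.5), simply amounts to zeroing the weight 1/S_n(f) outside the band.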
For detection purposes we wish to construct effectual templates, i.e. templates having a large overlap after maximization over μ. For parameter estimation we further need to construct faithful templates, which have large overlaps when μ ≃ λ. A practical (non-rigorous) criterion for faithfulness is that the "diagonal" ambiguity function A(λ, λ) be close to 1. Reduction in the overlap of template wave forms and true signals has an effect on the number of detectable events or, equivalently, causes a loss in the detection probability of a signal of a given strength. For a given signal-to-noise ratio, the distance up to which a detector can see depends primarily on the amplitude h_0 of the wave. Unavailability of a copy of the true signal means that the effective strength of the signal reduces from h_0 to A h_0, and hence the span of a detector reduces by the factor A. The number of events a detector can detect being proportional to the cube of the distance, a reduction in the overlap by a factor A means a drop in the number of detectable events, as compared to the case where a knowledge of the true wave form was available, by a factor A^3. For instance, a 10% (20%) loss in the overlap would mean a 27% (50%) loss in the number of events [39]. The aim of PN calculations is to make this overlap as close to 1 as possible. If we demand that we should be able to detect with PN templates about 90% (99%) of the signals that we would detect had we known the general relativistic signal, then we should have an overlap of no less than about 0.965 (0.997).
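The cubic dependence of the event rate on the overlap quoted in this paragraph is elementary to verify:

```python
# Rate is proportional to (distance)^3, and the effective range scales
# linearly with the overlap A, so the fraction of events retained is A^3.
def events_retained(A):
    return A ** 3

for A in (0.90, 0.80, 0.965, 0.997):
    lost = 1.0 - events_retained(A)
    print(f"overlap {A:.3f}: retain {100 * A ** 3:5.1f}%, lose {100 * lost:4.1f}%")
# overlap 0.900 -> lose 27.1%; overlap 0.800 -> lose 48.8% (~50%);
# overlap 0.965 -> retain ~90%; overlap 0.997 -> retain ~99%.
```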
As a model for noise we use the expected noise power spectral density in the initial LIGO interferometer [41], where S_0, α and f_0 are constants that characterize the detector sensitivity, the effective bandwidth and the frequency at which the detector noise is the lowest, respectively. In the case of initial LIGO, α = 2, f_s = 40 Hz and f_0 = 200 Hz. Because the noise is essentially infinite below the seismic cutoff f_s, and since we terminate the template wave forms when the velocity reaches that of the last stable orbit, the overlap integral (5.2) reduces to Eq. (5.5), where f_lso is the gravitational wave frequency corresponding to the last stable orbit. In order to compute the maximum overlap we proceed in the following manner. The evolution of the phase as a function of time is obtained by inverting numerically v in terms of t from Eq. (2.14) and inserting the result in Eq. (2.15) and then (2.13). Though the iterative procedure of inverting v in terms of t is rather computationally intensive, we need to employ it since the inaccuracies introduced by the stationary phase approximation in computing the Fourier transform of the wave form increase with the order of approximation, especially in the case of NS-BH and BH-BH binaries. In Table II we give a measure of the inaccuracies introduced by the stationary phase approximation at various post-Newtonian orders by computing the integral in Eq. (5.5) with h̃(f) being the fast Fourier transform and g̃(f) being the stationary phase approximation. (The three cases A_0, B_0 and C_0 are defined below.)
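A sketch of the truncated overlap integral (5.5): the noise-weighted product is evaluated only between the seismic cutoff f_s = 40 Hz and f_lso. The bowl-shaped PSD below (minimum near f_0 = 200 Hz) and the toy Fourier-domain chirps are schematic stand-ins of our own, not the exact initial-LIGO model quoted in the text.

```python
import numpy as np

F_S, F0 = 40.0, 200.0   # seismic cutoff and minimum-noise frequency (Hz)

def Sn(f):
    # Schematic bowl-shaped PSD with its minimum near F0 (a stand-in for
    # the initial-LIGO model; only the shape matters for this sketch).
    x = f / F0
    return 1e-46 * (x ** -4 + 2.0 * (1.0 + x ** 2))

def overlap(hf, gf, f, f_lso):
    # Normalized overlap computed only over F_S <= f <= f_lso, as in the
    # truncated integral (5.5); the uniform df cancels in the ratio.
    band = (f >= F_S) & (f <= f_lso)
    w = 1.0 / Sn(f[band])
    ip = lambda a, b: 4.0 * float(np.real(np.sum(a[band] * np.conj(b[band]) * w)))
    return ip(hf, gf) / np.sqrt(ip(hf, hf) * ip(gf, gf))

# toy Fourier-domain chirps differing slightly in their phasing
f = np.linspace(1.0, 1000.0, 4000)
h = f ** (-7.0 / 6.0) * np.exp(1j * 0.1000 * (f / F0) ** (-5.0 / 3.0))
g = f ** (-7.0 / 6.0) * np.exp(1j * 0.1001 * (f / F0) ** (-5.0 / 3.0))
print(overlap(h, h, f, f_lso=700.0))   # 1.0 by construction
print(overlap(h, g, f, f_lso=700.0))   # slightly below 1
```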
VI. RESULTS AND DISCUSSION
Having in hand the ambiguity function to measure the closeness of two wave forms [42] we can use it to pursue at a quantitative level the analysis of the convergence of the sequence of approximants defined above.
Let us first consider the wave forms defined in the formal test mass limit, where one keeps the η factors in front of E(v) and F(v) but neglects the η-dependence in the Taylor coefficients of E′(v) and F(v). Explicitly, we mean the wave forms defined by eliminating (numerically) v between Eqs. (6.1)-(6.3), in which v_lso = v_lso(η = 0) = 1/√6. Note that the main purpose of the overlap computations made for this formal test mass limit is to compare quantitatively the convergence of the P-approximants to that of the T-ones. One should keep in mind that when studying below, in the formal test mass limit, (1.4m⊙, 1.4m⊙) or (10m⊙, 10m⊙) systems (for which η takes its largest value), the absolute values of the overlaps are not reliable, though one assumes that the lessons learned from the P/T comparison are. The absolute values of the overlaps for the (1.4m⊙, 10m⊙) case are probably more reliable, but this is not clear as η ≃ 0.1077 is then only a factor 2.32 smaller than η_max = 0.25. This being said, we wish to compare the semi-maximized overlaps denoted for simplicity as in Eqs. (6.4). Here the superscript 0 on T, P or X denotes the above-defined formal η = 0 limit of Taylor, Padé-type or eXact wave forms, respectively (i.e. A = T, P or X in Eqs. (6.1)-(6.3)). Here one considers only the same values for the two dynamical parameters of these signals (i.e. the explicit m and η appearing in Eqs. (6.1)-(6.3), here expressed in terms of m_1 and m_2 in order to psychologically minimize the formal inconsistency of setting η = 0 in part of the formula and keeping it elsewhere) and maximizes over the kinematical ones t^A_c, Φ^A_c, t^X_c, Φ^X_c. To maximize over the reference times, it is sufficient (as indicated above) to fix t^X_c = 0 and maximize over t^A_c = t_c (τ, the time lag). Maximizing over the reference phases is more subtle, as the overlap depends separately on Φ^A_c and Φ^X_c and not only on their difference.
There is, however, a computationally non-intensive way to do it which is based on a conceptually simple geometrical formulation of the problem (see Appendix B).
Note that in Eqs. (6.4) the approximate template parameters are not optimized, but are taken to be equal to those of the exact signal. In other words, we compare the faithfulness of the various approximants together with their convergence properties. The results are given in Table III for n = 4-11 [43], as well as for the Newtonian approximants for the purpose of comparison. The overlaps quoted are the minimax overlaps, Eq. (B12), together with the corresponding best overlaps, Eq. (B11), in parenthesis below the minimax overlap. (The P-approximant P^3_4 corresponding to n = 7 has a singularity in the region of interest and hence we have used the approximant P^4_3. The P^5_5-approximant too has a pole and we have not computed the overlaps in this case, though if one desires one can compute other P-approximants, such as P^6_4 or P^4_6, at this order.) We consider three prototype cases: case A_0 [(1.4m⊙, 1.4m⊙)], case B_0 [(1.4m⊙, 10m⊙)], and case C_0 [(10m⊙, 10m⊙)]. We added an index zero to recall the fact that η = 0 has been used in E′ and F. [One should keep in mind the warning above that the numerical results for case B_0 are physically more reliable, while A_0 and C_0 are just mathematical ways of testing the convergence.] We performed another convergence test (still in the formal η → 0 limit) of a different nature. It is known in mathematics that one does not need to know in advance the limit of a sequence to test its convergence. One can instead use Cauchy's criterion, which says (roughly) that the sequence converges if, given some distance function d(h, g), d(h_n, h_m) → 0 as both n and m get large. In our case we have a distance function [44] defined by the ambiguity function, and we can compare the Cauchy convergence of the T- and P-approximants. Some results are given in Table IV, where one exhibits the semi-maximized (in the sense of Eqs. (6.4)) best overlaps ⟨T^0_n, T^0_{n+1}⟩ versus ⟨P^0_n, P^0_{n+1}⟩, for n = 4, …, 11, and the three prototype cases A_0, B_0, C_0.
(As in Table III, where appropriate we have used the P^4_3-approximant instead of P^3_4. Since the P^5_5-approximant has a pole in the region of interest, the entries corresponding to n = 10 are blank and the entries corresponding to n = 9 are the overlaps ⟨P^0_9, P^0_11⟩.) The last two Tables show very clearly that the P-approximants converge much better than the T-ones and that they provide a much more faithful representation of the signal. To measure the effectualness of our approximants (in the technical sense defined above) and study the biases they can introduce, we also performed numerical calculations in which we maximized over all parameters, while keeping track of the parameter values m^A_1, m^A_2 which, given the signal parameters m_1, m_2, maximize the overlaps. The results are presented in Table V for the three prototype cases A_0, B_0, C_0 and for the most important values (for the near future) of the order of approximation: n = 4, 5 and 6. In this case the overlaps are the minimax overlaps.
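Cauchy's criterion itself is simple to illustrate: one never needs the limit, only the mutual distances of successive approximants. A toy stand-in of our own (partial sums of exp(v) with a normalized L² overlap on a finite interval, instead of the wave forms and the ambiguity-function distance used in the paper):

```python
import math

def approximant(v, n):
    # n-th order partial sum of exp(v): a toy stand-in for a signal model
    # known only through its first n Taylor coefficients.
    return sum(v ** k / math.factorial(k) for k in range(n + 1))

def overlap(fa, fb, a=0.0, b=0.4, m=2000):
    # Normalized L2 overlap <fa, fb>/(||fa|| ||fb||) on [a, b]
    # (midpoint sampling; the grid spacing cancels in the ratio).
    xs = [a + (b - a) * (i + 0.5) / m for i in range(m)]
    dot = sum(fa(x) * fb(x) for x in xs)
    na = math.sqrt(sum(fa(x) ** 2 for x in xs))
    nb = math.sqrt(sum(fb(x) ** 2 for x in xs))
    return dot / (na * nb)

# Cauchy test: the overlap of successive approximants approaches 1,
# without ever referring to the exact limiting function exp(v).
for n in range(1, 6):
    o = overlap(lambda v: approximant(v, n), lambda v: approximant(v, n + 1))
    print(n, 1.0 - o)    # the "distance" 1 - o shrinks with n
```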
Our test mass results sum up the general behavior of the different approximants pretty well. First let us note that even at O(v 11 ) T -approximants do not achieve the requisite overlap of 0.965 except in the case of light binaries. This is consistent with the concern often expressed in the literature about the need for higher order post-Newtonian wave forms. In our view the most worrying aspect of the T -approximant is not that it does not obtain a high overlap but that the behavior of the approximant is oscillatory in nature. For instance, the O(v 6 ) T -wave form achieves an overlap, with the exact wave form, of about 0.96 which reduces at O(v 8 ) to as low as 0.71 for system B 0 and 0.85 for system C 0 (though for system A 0 it maintains a level of 0.965), increases at O(v 9 ) to about 0.93 for these systems and again drops back at O(v 11 ) to 0.85 and 0.90 for systems B 0 and C 0 , respectively. One clearly notices that P -approximants do not show such an erratic behavior. Recall that, in the test mass case, we are comparing a known exact wave form with an approximate signal model and hence the above conclusions are free from any prejudice. Though the second post-Newtonian P -wave form is not a faithful signal model, at 5/2 post-Newtonian order the P -approximant is a faithful signal model.
Moreover, P-approximants show an excellent Cauchy convergence, as is evident from Table IV. Notice that the T-approximants have a poor Cauchy convergence for systems B_0 and C_0. This makes them ill-suited as faithful templates. T-approximants are not always effectual signal models either. Sometimes they do obtain overlaps larger than 96.5%, but at the cost of producing a very large bias in the estimation of the total mass. This is to be contrasted with the P-approximants, which are effectual at O(v^4) at the level of 99.7% or better, at the cost of very little bias (δm/m always less than 3.5%, and less than 1% in most cases). We have also computed the biases in the estimation of the parameter η and there too we see a similar trend.
VII. ROBUSTNESS
Up to this point in the paper we have mainly relied on the test mass limit to assess the quality of our approximants. In this section we shall try to go beyond this formal limit to check the robustness of our proposal under the turning on of η.
We can first use all the existing information about the comparable-mass case and see whether turning on η modifies in any way the trend we saw above. As a first test (a "visual" one) we plot in Fig. 7 the Newton-normalized flux functions F_{Tn}(v; η), F_{Pn}(v; η) as functions of v, for the maximal value η = 1/4 and for the cases where we know them, i.e. n = 2, 3, 4 and 5. Using the same information we can also check the η-robustness of our Cauchy-convergence criterion. This is done in Table VI, where we present the semi-maximized best overlaps, Eq. (B11), ⟨P^η_3(m, η), P^η_4(m, η)⟩ and ⟨P^η_4(m, η), P^η_5(m, η)⟩, and compare them to their T-counterparts for the (real) cases A, B and C. We also made many attempts at testing the robustness of our conclusions when taking into account the existence of (unknown) higher-order η-dependent corrections. There is no really conclusive way of achieving such a task, but here is our best attempt. Our starting point is to model an infinite number of (unknown) higher-order PN corrections by just one (non-perturbative) parameter: κ_0. As introduced in Eq. (4.11) above, κ_0 parametrizes our ignorance about the true location of the light ring (pole in e(x) and F(v)). Our 2PN Padé estimates gave us an η-corrected value v_pole, but we wish to consider here the possibility that the true value may be quite different from our estimate. More precisely, Eq. (4.11) parametrizes the pole at 3x_pole = (1 − κ_0 η)^{−1}, while 3x^{P4}_pole ≃ 1.4312 for η = 1/4, corresponding to κ_0 ≃ +1.2051. To explore a very large range of possibilities we shall consider that the true value of κ_0 (for η = 0.25) might range between κ_0 = −1 (meaning 3x_pole = 0.8) and κ_0 = +2 (meaning 3x_pole = 2.0). In Table VII we compare the locations of the last stable orbit, x_lso ≡ v^2_lso, predicted by the T- and P-approximants of the energy function with the exact location x^X_lso.
We see that P -approximants capture the location much better than the T -approximants.
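The κ_0 parametrization is easy to check: all three pole locations quoted above follow at once from 3x_pole = (1 − κ_0 η)^{−1}, with 47/39 the Padé-expected κ_0.

```python
def three_x_pole(kappa0, eta):
    # Pole location of the fiducial energy function: 3 x_pole = 1/(1 - kappa0*eta).
    return 1.0 / (1.0 - kappa0 * eta)

eta = 0.25
print(three_x_pole(47 / 39, eta))   # ~1.4312, the Pade-expected location
print(three_x_pole(-1.0, eta))      # 0.8
print(three_x_pole(2.0, eta))       # 2.0
print(three_x_pole(0.0, eta))       # 1.0: the pole stays at the test-mass value
```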
Having chosen the range of κ_0 we shall consider, and adopting the definition (4.11) for the corresponding fiducial "exact" e-function, it remains to define a corresponding fiducial "exact" f-function, having the property that the corresponding F-function coincides, up to O(v^6) terms, with the known T_5 expansion of F. To this effect the simplest proposal is to define first the T_11 (Taylor to v^11) expansion of f^{κ0}_X(v), where A_k(η), k ≤ 5, are given by Eq. (4.3) and the others (η = 0) by Eq. (4.2), and then to define the corresponding fiducial "exact" f-function. Having defined some fiducial "exact" e- and f-functions, we have correspondingly defined some "exact" wave form h^{κ0}_X and, using the definitions above, both T-type and P-type approximants of this wave form. We are interested in knowing whether the P-approximants behave better than the T-ones even in the presence of higher-order effects significantly different from the behavior expected from the 2PN Padé results. The results of this exercise are presented in Table VIII, where one has computed the semi-optimized minimax overlaps ⟨P^η_n(m, η), X^η_{κ0}(m, η)⟩ and ⟨T^η_n(m, η), X^η_{κ0}(m, η)⟩ for the cases A, B, C, for κ_0 = −1, 1.2051 and 2, and for n = 4, 5, 6 and 7. In order to test the effectualness of the approximants, in Table IX we quote the fully-optimized minimax overlaps ⟨P^η_n(m, η), X^η_{κ0}(m, η)⟩ and ⟨T^η_n(m, η), X^η_{κ0}(m, η)⟩, again for the cases A, B, C, for κ_0 = −1, 1.2051 and 2, and for n = 4, 5, 6 and 7. From Table VIII we clearly see that T-approximants fail to be faithful signal models even at the third post-Newtonian order. The second post-Newtonian wave form of this family would clearly fail to capture even 20% of all potential NS-NS events that would be detectable with the aid of a family of templates constructed out of P-approximants.
Even when parameter values are extreme (κ 0 = −1, and very high masses) the presently available 5/2 post-Newtonian energy and flux functions are sufficient to construct a faithful P -approximant.
We observe that, except when the parameter values are extreme (very low value of κ_0 and high masses), O(v^5) P-approximants are indeed good effectual signal models. In fact, in all cases but one they obtain an overlap in excess of 99%. The bias in the estimation of the total mass is at worst 7.6% and in many cases it is below 2%. On the contrary, standard second post-Newtonian approximants are not effectual in many cases; when they are effectual they often produce a relatively large bias. For instance, for system B, when κ_0 = 47/39, the second post-Newtonian T-approximant achieves an overlap of 0.98, compared to 1.00 achieved by the P-approximant of the same order. However, the bias is 97% in the former case as compared to a tiny 1.1% in the latter case. Similarly, for κ_0 = 2, the 2.5 post-Newtonian T-approximant achieves an overlap of 0.988 at a bias of 75%, while the P-wave form achieves a 0.996 overlap with practically no bias at all. The biases in the estimation of the η-parameter (not shown) are also quite small when P-approximants are used, as compared to T-approximants.
A word of caution is in order for those who desire to use standard post-Newtonian templates: a careful examination of the above Tables reveals that the 2.5 post-Newtonian T-approximant systematically obtains poorer overlaps and larger biases. This is of course related to the fact that the 5/2 post-Newtonian flux is very badly behaved (cf. Fig. 3). Hence one must never employ the 2.5 post-Newtonian T-approximant for searches. However, P-approximants do not suffer from this predicament. Indeed, at O(v^5) the P-wave form is an excellent effectual signal model. For all systems and parameters this model obtains an overlap of better than 99.5% at a bias less than 1.5%.
VIII. CONCLUSIONS
In this work we have studied the convergence properties of various post-Newtonian templates for detecting gravitational waves emitted by inspiralling compact binaries consisting of neutron stars and/or black holes. We have shown that the standard post-Newtonian filters considered in the literature, referred to as the T-approximants because they are based on Taylor series, define a badly convergent sequence of approximants. Even at order v^11 the T-approximants only provide overlaps ∼ 0.86 with the exact signal in the case of binaries consisting of 1.4-10 M⊙ systems. Worse, the convergence of the sequence of T-approximants is oscillatory rather than monotonic. Our results on T-approximants confirm previous, less convincing arguments in the literature, which were either based on rough quantitative estimates, or on numerical calculations based on the stationary phase approximation for Fourier transforms, an approximation that we have shown not to be sufficiently accurate for this purpose (see Table II).
We have defined a new sequence of approximants, referred to as the P-approximants, based on two ingredients: (i) the introduction, on theoretical grounds, of two new energy-type and flux-type functions, e(x) and f(v), instead of the conventionally used E(v) and F(v), and (ii) the systematic use of Padé approximation for constructing successive approximants of e(x) and f(v). The new sequence of P-approximants has been shown to exhibit a systematically better convergence behavior than the T-approximants. The overlaps they achieve at a fixed post-Newtonian order are usually much higher, and the convergence is essentially monotonic instead of oscillatory (as pictorially described in Fig. 1 and quantitatively measured by the overlaps quoted in Tables III, V, VIII, and IX). From our extensive study of the formal "test-mass limit" η ≡ m_1 m_2/(m_1 + m_2)^2 ⇒ 0, i.e. keeping overall η-factors but neglecting η in the coefficients of the post-Newtonian expansions, it appears that the presently known O(v/c)^5-accurate post-Newtonian results allow one to construct approximants having overlaps larger than 96.5% with the exact signals (overlaps corresponding to κ_0 = 47/39, 2 in Table IX and all, but one, overlaps in Table IX). Such overlaps are enough to guarantee that no more than 10% of signals may remain undetected. By contrast, (v/c)^5-accurate T-approximants only give overlaps of 50%, and sometimes even as low as 30%, corresponding to losses of 87.5% and 97% of events, respectively. Our results are summarized in Fig. 8, where we have plotted the fraction of events which the templates constructed out of T- and P-approximants would detect, relative to the total number of events that would have been detectable if we had had access to the true signal. We clearly notice the superiority of the P-approximants. Moreover, our computations indicate that the new templates entail only acceptably small biases in the estimation of signal parameters (see Tables V and IX).
In the terminology introduced in the text, P-approximants are both more effectual (higher fully maximized overlaps) and more faithful (smaller biases) than the usual T-approximants. The above conclusions are primarily based on the study of the formal test-mass limit and assume that turning on η brings only a smooth deformation of what happens at η ⇒ 0. We have also studied the effect of turning on η (η ≠ 0) in the coefficients of the post-Newtonian expansions. From all our checks it seems that the η-dependence is indeed smooth and should not alter the fact that the P-approximants have a better convergence than the T-ones. Our construction predicts that the last stable circular orbit is closer (i.e. has a higher orbital frequency) when η ≠ 0 (see Eq. (3.23)). This is good news because it improves the efficiency of P-approximants used as filters for detectors having a fixed frequency band. However, we have no independent confirmation of this (favorable) dependence on η. We have tested the robustness of our conclusions against possible very drastic changes brought by (still unknown) η-dependent terms in the higher post-Newtonian coefficients. In the case where these extreme changes go in the direction opposite to what is suggested by presently known results (i.e. in the case κ_0 = −1), we find that the overlaps are worsened compared to our best-estimate range (κ_0 = 47/39). This shows that it is important to extend the presently available O(v^5) post-Newtonian results to the third post-Newtonian level (notably for the equations of motion) [25]. This will allow one to check whether the η-dependence of the 2.5 post-Newtonian results that we use is typical of the higher terms (as our method assumes) or exhibits some abnormal behavior for some unforeseeable reason. When third post-Newtonian results are available it will still be advisable to use the P-approximants: they have consistently higher overlaps and lower biases (cf. Table IX).
In this study we have only considered the noise power spectral density corresponding to the initial LIGO interferometers. Naturally, one must study other cases as well. Based on the current study we can be confident that in all cases the P-approximant wave forms will fare much better than the standard post-Newtonian ones. However, their performance in absolute terms needs to be re-assessed, since other interferometers, such as GEO600, VIRGO, and enhanced LIGO, have effective bandwidths and frequencies of maximum sensitivity somewhat different from initial LIGO. In addition, one must also address the performance of P-approximant wave forms with regard to parameter estimation.
ACKNOWLEDGMENTS
It is a pleasure to thank Eric Poisson for providing the numerical test mass flux. BRI thanks the Institut des Hautes Études Scientifiques, the University of Wales Cardiff and the Albert Einstein Institute, Potsdam, while BSS thanks the Raman Research Institute and the Institut des Hautes Études Scientifiques for hospitality during different phases of this work. This work was supported in part by NSF grant PHY-9424337. BSS thanks Kip Thorne and his group for useful conversations.
APPENDIX A: PADÉ APPROXIMANTS
A Padé approximant to the truncated Taylor series expansion of a function is a rational polynomial with the same number of coefficients as the latter. The coefficients of the Padé approximant are uniquely determined by reexpanding the Padé approximant to the same order as the truncated Taylor series and demanding that the two agree. In our study we use a continued fraction form of the (near diagonal) Padé approximant instead of the usual rational polynomial.
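The matching condition just described (re-expand N_m/D_k and demand agreement with S_n through order v^{m+k}) reduces to a small linear system for the denominator coefficients. The sketch below is our own illustration in exact rational arithmetic: it builds a generic P^m_k from Taylor coefficients and checks it on the exponential series, whose diagonal P^1_1 is (1 + v/2)/(1 − v/2).

```python
from fractions import Fraction
from math import factorial

def solve(A, b):
    # Tiny exact Gauss-Jordan elimination with pivot search.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(a, m, k):
    # P^m_k Pade of sum_i a[i] v^i (needs len(a) >= m+k+1): returns
    # (num, den) coefficient lists with den[0] = 1, such that the Taylor
    # re-expansion of num/den reproduces a[0..m+k].
    a = [Fraction(x) for x in a]
    get = lambda i: a[i] if i >= 0 else Fraction(0)
    # denominator q_1..q_k from: sum_j a[m+i-j] q_j = -a[m+i], i = 1..k
    A = [[get(m + i - j) for j in range(1, k + 1)] for i in range(1, k + 1)]
    b = [-get(m + i) for i in range(1, k + 1)]
    den = [Fraction(1)] + solve(A, b)
    num = [sum(get(l - j) * den[j] for j in range(min(l, k) + 1))
           for l in range(m + 1)]
    return num, den

# check on exp(v): P^1_1 = (1 + v/2)/(1 - v/2)
a = [Fraction(1, factorial(i)) for i in range(6)]
num, den = pade(a, 1, 1)
print([float(c) for c in num], [float(c) for c in den])  # [1.0, 0.5] [1.0, -0.5]
```

Swapping m and k in the call gives the superdiagonal variants (e.g. P^4_3 versus P^3_4) mentioned in the text.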
Let S_n(v) = a_0 + a_1 v + ⋯ + a_n v^n be a truncated Taylor series. A Padé approximant of the function whose Taylor approximant to order v^n is S_n is defined by two integers m, k such that m + k = n. If T_n[⋯] denotes the operation of expanding a function in a Taylor series and truncating it to accuracy v^n (included), the P^m_k Padé approximant of S_n is defined by Eq. (A1), where N_m and D_k are polynomials in v of orders m and k respectively. If one assumes that D_k(v) is normalized so that D_k(0) = 1, i.e. D_k(v) = 1 + q_1 v + ⋯, one shows that Padé approximants are uniquely defined by (A1). Note that, trivially, P^n_0[S_n] ≡ S_n, which indicates that Padé approximants are really useful only when k ≠ 0. Actually, it seems that in many cases the most useful Padé approximants are the ones near the "diagonal", m = k, i.e. P^m_m if n = 2m is even, and P^{m+1}_m or P^m_{m+1} if n = 2m + 1 is odd. In this work we shall use, except when specified otherwise, the diagonal (P^m_m) and the "subdiagonal" (P^m_{m+1}) approximants. For instance, the P^3_4-approximant of the flux function has a pole and therefore we use instead the P^4_3-approximant. The diagonal (P^m_m) or subdiagonal (P^m_{m+1}) Padé approximants can be conveniently written in a continued fraction form (see e.g. [45]). In geometrical terms, p_α describes, as α varies, an ellipse in the X-plane (the projection of the circle e_α = cos α e_1 + sin α e_2), and the maximum projection onto the X-plane corresponds to the semi-major axis. Maximizing the square ‖p_α‖² = ⟨p_α, p_α⟩ over α is easy (using cos²α = (1 + cos 2α)/2, sin²α = (1 − cos 2α)/2 and 2 sin α cos α = sin 2α, and maximizing over 2α) and yields finally the semi-major axis. Inserting the definitions of the orthonormalized vectors, Eq. (B4), into the definitions, Eq.
(B10), of A, B and C allows one to express (cos θ_AX)_max only in terms of various scalar products of the initial vectors. It is easily checked that the final answer does not depend on the choice of basis in the A- and X-planes, and can (if wished) be expressed only in terms of the "2-forms" ω_A ≡ h^A_1 ∧ h^A_2 and ω_X ≡ h^X_1 ∧ h^X_2 (and of the Euclidean structure of W).
The result Eq. (B11) gives the best possible overlap when optimizing separately over the phases of the exact and approximate signals. This gives the mathematical measure of the closeness of the two wave forms. However, in practice we do not have access to the phase of the exact signal. It might happen that the latter phase, i.e. equivalently the angle β, is not optimum. Therefore, a physically more relevant measure of the closeness of the two wave forms (especially for the purpose of detection) is obtained by first optimizing over α (the parameter we can dial) and then considering that β has the worst possible value. In terms of the geometric reasoning given above one finds that the worst possible case corresponds to the semi-minor axis of the ellipse given by Eq. (B9), i.e.
In our simulations we considered both measures of the closeness of the two signals. We use Eq. (B11) when we study the mathematical convergence and we use Eq. (B12) when we are interested in the detection. We shall refer to Eq. (B11) as the best overlap and Eq. (B12) as the minimax overlap.
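The two-phase optimization above has an equivalent linear-algebra form: after orthonormalizing a basis of each two-dimensional signal plane, the best and minimax overlaps are the largest and smallest singular values of the 2×2 matrix of mutual projections (the semi-major and semi-minor axes of the ellipse). A sketch, with the Euclidean dot product standing in for the noise-weighted inner product:

```python
import numpy as np

def plane_overlaps(hA, hX):
    """Best and minimax overlaps between the planes spanned by the rows
    of hA and hX (each 2 x N), using the Euclidean inner product.
    best    = optimize over both phases        = largest singular value,
    minimax = dial the template phase, take the
              worst signal phase               = smallest singular value."""
    # Orthonormalize each pair of basis vectors (thin QR on the transposes).
    eA, _ = np.linalg.qr(np.asarray(hA, float).T)   # N x 2
    eX, _ = np.linalg.qr(np.asarray(hX, float).T)
    M = eA.T @ eX                                   # 2 x 2 projection matrix
    s = np.linalg.svd(M, compute_uv=False)
    return s.max(), s.min()

# Example: X-plane tilted by an angle psi out of the A-plane
psi = 0.3
hA = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
hX = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(psi), np.sin(psi)]])
best, minimax = plane_overlaps(hA, hX)
```

In this example the best overlap is 1 (the planes share a direction) while the minimax overlap is cos ψ, the worst-phase projection.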
then the corresponding coefficients F_k are given by F_k = A_k, k = 0, …, 5; F_k = A_k + B_k ln v_lso, k = 6, …, 11. The ℓ_k are as given by Eq. (4.9) of [37], with the convention g̃(f) ≡ ∫ dt e^{2πift} g(t).

[Table captions:] Overlap integrals of a test-mass wave form whose Fourier transform is computed using the stationary phase approximation with the same wave form whose Fourier transform is computed using a numerical fast Fourier transform; n stands for the order of the approximant, with X denoting the exact wave form. — (Table VII) Location of the last stable circular orbit determined by the T- and P-approximants in the finite mass case for different values of the parameter κ_0; at order v^2 the last stable orbit is not defined by P-approximants, while at orders v^4 and beyond the P-approximants predict the location of the lso pretty well. — Robustness of the T- and P-approximants in the comparable mass case: Faithfulness. Values quoted are the minimax overlaps together with the best possible overlaps, Eq. (B11), in parentheses. System D corresponds to a binary consisting of stars of masses 20 M_⊙ and 1.4 M_⊙; in this extreme mass-ratio case the P-approximants at O(v^5) are NOT faithful (overlaps < 96.5%).
[Table caption:] Robustness of the T- and P-approximants in the comparable mass case: Effectualness. Values quoted are the minimax overlaps, Eq. (B12), together with the percentage bias in the estimation of the total mass, 100(1 − m_A/m), in parentheses.

[Figure caption:] Newton-normalized energy functions in the comparable mass case. We compare the convergence of the T-approximants and P-approximants. Observe that the P-approximants converge much faster to the fiducial exact energy than the standard approximants. | 2017-05-08T23:50:02.857Z | 1997-08-18T00:00:00.000 | {
"year": 1997,
"sha1": "0225df14aa5d54029ea64121058533aa9dbebc87",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/5548/1/DAMprd98.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6064cde23e832039a0770944fa059bd3df467ecf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119197908 | pes2o/s2orc | v3-fos-license | Revisiting Jovian-Resonance Induced Chondrule Formation
It is proposed that planetesimals perturbed by Jovian mean-motion resonances are the source of the shock waves that form chondrules. This shock-induced chondrule formation is considered to require a planetesimal velocity relative to the gas disk on the order of >7 km/s at 1 AU. In previous studies on planetesimal excitation, the effects of Jovian mean-motion resonances together with gas drag were investigated, but the velocities obtained were at most 8 km/s in the asteroid belt, which is insufficient to account for the ubiquitous existence of chondrules. In this paper, we reexamine the effect of Jovian resonances and take into account the secular resonance in the asteroid belt caused by the gravity of the gas disk. We find that the velocities relative to the gas disk of planetesimals a few hundred kilometers in size exceed 12 km/s, and that this is achieved around the 3:1 mean-motion resonance. The heating region is restricted to a relatively narrow band between 1.5 AU and 3.5 AU. Our results suggest that chondrules were produced effectively in the asteroid region after Jovian formation. We also find that many planetesimals are scattered far beyond Neptune. Our findings can explain the presence of crystalline silicate in comets if the scattered planetesimals include silicate dust processed by shock heating.
INTRODUCTION
Chondrites preserve information about the early stage of the solar system that has been lost in the planets themselves. Spherical chondrules, grains 0.1 mm to 1 mm in size composed of the silicates found in meteorites, are considered to have been formed from precursor particles that were heated and melted in flash heating events. These then cooled and resolidified over a short period of time (∼ hours) in a protoplanetary disk (e.g., Jones et al. 2000). It is believed that the efficiency of chondrule formation was high, because chondrules are major constituents of chondritic meteorites. So far, various mechanisms for chondrule formation have been proposed; however, it has not yet been established which process is predominant. One mechanism for chondrule formation that has received considerable attention is heating by shock waves (Hood & Horanyi 1991, 1993; Boss 1996; Jones et al. 2000). The heating process of chondrule precursors by shock waves has been investigated in detail in previous studies. It has been shown that the shock-wave heating model satisfies various constraints related to chondrule formation, such as the peak temperature (∼2000 K) and short cooling time (Hood & Horanyi 1991, 1993; Iida et al. 2001; Ciesla & Hood 2002; Miura, Nakamoto, & Susa 2002; Desch & Connolly 2002; Miura & Nakamoto 2005, 2006). As plausible sites for such shock waves to occur, highly eccentric planetesimals have been proposed (Hood 1998; Weidenschilling, Marzari, & Hood 1998). If the relative velocity between a planetesimal and the gas disk exceeds the speed of sound of the gas, a bow shock is produced. Weidenschilling et al. (1998) suggested that Jovian mean-motion resonances excite planetesimals. The formation of Jupiter in the gas disk induces resonances and strongly affects the motion of planetesimals around the asteroid belt (∼2 AU-5 AU). The evolution of such planetesimals under the influence of gas drag has also been studied in detail (Ida & Lin 1996; Marzari et al.
1997; Weidenschilling et al. 1998; Marzari & Weidenschilling 2002). These works showed that planetesimals migrate toward the Sun due to gas drag even if their radii are on the order of 1000 km. During the migration from 4 AU to 3 AU, the eccentricities of the planetesimals are excited by multiple Jovian mean-motion resonances. The excited planetesimals can acquire further eccentricity, up to about e ∼ 0.4 (∼6 km s^−1), while trapped in the 2:1 resonance (∼3.3 AU), provided Jupiter has an eccentricity larger than e ≈ 0.03. The excitation of eccentricity increases the gas drag, and the eccentricity and semi-major axis are quickly damped as a result. The orbits of many planetesimals are circular before they reach 2 AU. Marzari & Weidenschilling (2002) showed that the velocity of 100 km-300 km-sized planetesimals relative to the gas disk reaches a maximum of 8 km s^−1 (e ∼ 0.6 at 2 AU). Planetesimals with such high velocities are rare, and the period over which these velocities are achieved is limited to on the order of 10^4 yr. Simulations of gravitationally interacting planetesimals suggest that the process is not very efficient, i.e., the area swept by chondrule-forming shocks over a period of 1 Myr-2 Myr is just 1%, and the planetesimals need to be about half the size of the Moon to accumulate the speed required for chondrule formation (Hood & Weidenschilling 2012).
Chondrule formation induced by shock waves requires the relative velocity to be on the order of 7 km s^−1 for a partial melt of submillimeter-sized dust in a gas disk with a density of ρ ∼ 10^−9 g cm^−3 (e.g., Hood 1998; Iida et al. 2001; Desch & Connolly 2002). Although the maximum velocities of planetesimals obtained in the previous simulations of planetesimal evolution in resonances suggest that chondrule formation by bow shocks is likely, the highest speeds obtained (∼8 km s^−1) are rare and only marginally achieve efficient formation. Furthermore, in the asteroid belt of the minimum-mass disk (ρ ∼ 10^−10 g cm^−3), where the resonances exist, a larger relative velocity, ≳10 km s^−1, would be preferable to ensure complete melting of 1 mm-sized dust. If the planetesimals could achieve a relative velocity higher than 10 km s^−1 more frequently during orbital migration, the ubiquitous existence of chondrules could be explained more satisfactorily.
In previous works regarding planetesimals in resonances, the gravity of the gas disk and of planets other than Jupiter was neglected, i.e., the effect of secular resonances was neglected. However, as described in the previous paragraph, effective shock-heating of chondrules requires a relatively dense gas disk, at least on the order of the minimum-mass disk. Such a gas disk provides not only the drag force but also a gravitational force, and causes secular resonance. The gravitational potential of the disk precesses the Jovian pericenter. When this precession rate coincides with the precession rate of the planetesimals, a secular resonance arises, which enhances the eccentricities of the planetesimals. Such a secular resonance occurs between 2 AU and 4 AU in a disk of density ∼0.1-5 times that of the minimum-mass disk (e.g., Heppenheimer 1980; Lecar & Franklin 1997; Nagasawa, Tanaka, & Ida 2000; Nagasawa, Ida, & Tanaka 2001). Even if the resonance is not sweeping, its existence causes a high-amplitude oscillation of the eccentricity and further excitation of the relative velocity in the vicinity of the secular resonance. In chondrule formation induced by planetesimal bow shocks caused by Jovian perturbation, secular resonance inevitably occurs and plays an important role. Planetesimal bow shocks may also contribute to the origin of the crystalline silicate in comets. The presence of crystalline silicate in comets has been confirmed through infrared observations of dust grains in a number of cases (Bregman et al. 1987; Molster et al. 1999; Hanner & Bradley 2004). It is thought that crystalline silicate is formed in the protoplanetary disk, because the silicate dust in the interstellar medium is almost entirely amorphous (Li, Zhao, & Li 2007). Experimental studies on the thermal annealing of amorphous silicate show that the formation of crystalline silicates requires temperatures above 800 K (Hallenbeck, Nuth, & Daukantas 1998).
In contrast, the composition of the gas in cometary comae indicates the preservation of interstellar ice in the cold outer nebula (Biermann, Giguere, & Huebner 1982). It is unclear why these two materials, which carry contradictory heating records, coexist in comets (see Yamamoto & Chigai 2005; Tanaka, Yamamoto, & Kimura 2010; Yamamoto et al. 2010). It is possible that the amorphous silicates crystallize through shock heating. As shown later, a number of planetesimals are likely scattered by the Jovian resonances. This mechanism may explain the incorporation of both high- and low-temperature materials in comets.
In this letter, we study the evolution of planetesimals in the gas disk, including the effect of secular resonances caused by the gas disk potential, and determine whether the relative velocity is sufficient for chondrule formation. In Section 2, we briefly describe the setup of the numerical simulations and present results. We show that secular resonance excites most of the planetesimals up to e ≳ 0.6 (v_rel ≳ 12 km s^−1). Finally, we summarize and discuss our results in Section 3.
NUMERICAL RESULTS
We investigate the orbital evolution of test particles perturbed by Jupiter and the disk using the time-symmetric fourth-order Hermite code, which has an advantage in precise long-term calculations of pericenter evolution and in the detection of close encounters (e.g., Kokubo & Makino 2004 and references therein). The protoplanetary disk provides a background gravitational field that induces secular resonance and gas drag. We use a thin-disk potential for the minimum-mass disk as described by Ward (1981). The gap in the disk where Jupiter resides is neglected. We perform simulations both with and without the disk potential and compare the results. When we include the disk potential, the change in the rotational speed of the gas disk due to its own gravity is taken into account. We assume planetesimals 300 km in size with a material density of ρ_mat = 3 g cm^−3 to calculate the gas drag. Bow-shock simulations of chondrule formation showed that larger planetesimals (≳1000 km) are required to account for the thermal history, particularly the cooling rates (Ciesla, Hood, & Weidenschilling 2004; Boley, Morris, & Desch 2013). Previous studies of planetesimal evolution in the gas disk have shown that larger planetesimals attain larger eccentricities because of the weakened gas drag. In this letter, we select 300 km planetesimals to show that the relative velocity required for chondrule melting is achieved under gas drag. We tested other sizes (100 km, 300 km, 500 km, and 1000 km) and confirmed that our conclusion hardly changes.
We adopt the gas drag force given by Adachi, Hayashi, & Nakazawa (1976). The specific characteristics of the gas drag have little effect on the maximum eccentricity of the planetesimals (Marzari & Weidenschilling 2002). We use a drag coefficient that varies with the Mach number and the Reynolds number (Tanigawa et al., in prep.), but its precise choice is not essential in our simulations.
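For orientation, the quadratic-drag stopping time m v / F_drag for such a large body is on the order of 10^4 yr, consistent with the resonance timescales discussed here. A rough sketch (C_d = 1, a radius of 150 km, and the local gas density are illustrative choices of ours, not the Mach/Reynolds-dependent coefficient used in the simulations):

```python
import numpy as np

def stopping_time_yr(radius_cm, rho_mat, rho_gas, v_rel, c_d=1.0):
    """Rough drag stopping time t ~ m v / F for quadratic (high-Mach) drag,
    F = (1/2) c_d pi r^2 rho_gas v^2, all quantities in cgs units."""
    mass = 4.0 / 3.0 * np.pi * radius_cm**3 * rho_mat
    force = 0.5 * c_d * np.pi * radius_cm**2 * rho_gas * v_rel**2
    return mass * v_rel / force / 3.156e7   # seconds -> years

# A 300 km-sized body (radius taken as 150 km), rho_mat = 3 g/cm^3,
# rho_gas ~ 1e-10 g/cm^3 near the asteroid region, v_rel = 10 km/s
t_yr = stopping_time_yr(1.5e7, 3.0, 1.0e-10, 1.0e6)
```

The estimate scales linearly with radius and inversely with gas density, which is why larger planetesimals keep their eccentricities longer.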
We use the current mass, eccentricity, and semi-major axis of Jupiter, which are 1 M_J, e = 0.048, and a = 5.2 AU, respectively. With these parameters, the secular resonance caused by the minimum-mass disk occurs at around 3.2 AU. We start our simulations by placing planetesimals at 4.1 AU, just outside the 3:2 resonance region. When we start simulations from inside 3 AU, the eccentricity stays smaller than 0.2 for planetesimals larger than 100 km in size, regardless of whether the disk potential is included. Accordingly, such planetesimals hardly migrate, since their migration timescales exceed the lifetime of the gas disk.
The trajectories of 20 planetesimals 300 km in size with different initial orbital angles are plotted in Figure 1. The evolutions in which the disk's self-gravity is included are shown in the left panel and the evolutions without the disk potential are shown in the right panel.
The planetesimals migrate inwards due to the gas drag. As already shown in previous works, the resonances between 3 AU and 4 AU increase the eccentricities of migrating planetesimals. Since the 3:1 resonance at ∼ 2.5 AU is separated from the other resonances, the eccentricity is damped before the 3:1 resonance is reached as shown in the right panel. On the other hand, in the left panel, the increase in eccentricity continues until the 3:1 resonance due to the extra excitation caused by the secular resonance.
The typical velocity of the planetesimals relative to the gas disk can be estimated as v_rel ∼ eV_Kep, where V_Kep is the Kepler velocity at the semi-major axis a and e is the eccentricity (Adachi et al. 1976). Note that eV_Kep is the radial velocity at the location r = a (where r is the distance from the star). In the case of low eccentricity, the relative velocity eV_Kep corresponds to the maximum value, and the smallest relative velocity (∼(1/2)eV_Kep) occurs at the pericenter and apocenter. In high-eccentricity cases (e ≳ 0.5), the actual maximum relative velocity is achieved between the location r = a and the pericenter, and its magnitude is larger than eV_Kep. In Figure 1, the relative velocity (eV_Kep) is shown by dotted gray lines. When the disk potential is not considered, the maximum speed rarely exceeds 10 km s^−1; when it is considered, however, the maximum speed exceeds 10 km s^−1 for all planetesimals. Figure 2 shows typical evolutions of the semi-major axes and eccentricities for two cases: including the disk potential (red lines) and excluding it (blue lines). The planetesimals start to migrate when their eccentricities are pumped up by Jupiter. Since the initial location of the planetesimals is near the chaotic resonance-overlapping region, the period for which the planetesimals remain near 4.1 AU varies from case to case. When they migrate to the 2:1 mean-motion resonance (∼3.3 AU), they become trapped in it. In the case of Figure 2, the oscillations during 2.35 Myr-2.4 Myr (blue line) and 2.6 Myr-2.8 Myr (red line) correspond to such trapping. In our 20 simulations, the time trapped in the resonance tends to be longer when the disk potential is considered. When their eccentricities are excited to e ∼ 0.4, the planetesimals become detached from the resonance, as discussed in previous papers (e.g., Marzari & Weidenschilling 2002).
If there were no secular resonance between 2 AU and 3 AU, planetesimals would continue rapid migration due to gas drag until their orbits became circular. When secular resonance is taken into account, however, the migration is again halted temporarily at the location of the resonance (∼2.8 Myr). The eccentricities are excited further, but at such high eccentricities the gas drag drives the planetesimals away from the secular resonance at around e ∼ 0.7. The 3:1 resonance (a = 2.5 AU) does not play as important a role as the 2:1 or the 3:2 resonance, but at high eccentricity it can further increase the eccentricity by a small amount (∆e ≲ 0.1). The region where the maximum speed tends to be recorded is the location of the 3:1 resonance. [Figure 3 caption: e_max is counted while the planetesimals are within 4.5 AU; the lower-eccentricity groups of the bimodal distributions in Panel (a) tend to correspond to planetesimals that encountered Jupiter.] The planetesimals remaining in the asteroid region reach high velocities due to the secular resonance, not due to the Jovian encounters.
The figure reveals that the typical maximum eccentricity with secular resonance is ∆e ∼ 0.15 higher than that obtained when only mean-motion resonances are considered. The evolution toward the 2:1 resonance basically follows a line of apocentric distance a(1 + e) ∼ 4.5 AU. When the eccentricity excitation stops near the 2:1 resonance, e_max is ∼0.4; the peak near e_max ∼ 0.4 (blue distribution in Figure 3) originates from this fact. About 1/3 of the planetesimals drop out of the 2:1 resonance in the case without the disk potential. On the other hand, in the case with the disk potential, all planetesimals that are not scattered by Jupiter continue on their trajectory until reaching the 3:1 resonance. The peak at e_max ∼ 0.7 corresponds to this state. At the location of the 3:1 resonance, V_Kep ∼ 30 km s^−1 (a/AU)^−1/2 ∼ 20 km s^−1. Thus, the typical maximum of the relative velocity is approximately given by 20e km s^−1. The difference between the two cases is ∼3 km s^−1 (∆e ∼ 0.15).
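The v_rel ∼ eV_Kep estimate above is easy to tabulate (the 30 km s^−1 normalization of the Kepler speed at 1 AU follows the text; the specific (e, a) pairs below are illustrative):

```python
import numpy as np

# v_rel ~ e * V_Kep,  with V_Kep ~ 30 km/s * (a/AU)^(-1/2) as in the text
def v_rel_kms(e, a_au):
    return e * 30.0 / np.sqrt(a_au)

# Around the 3:1 resonance (a ~ 2.5 AU) an excited e ~ 0.65 gives ~12 km/s,
# while e ~ 0.4 at the 2:1 resonance (a ~ 3.3 AU) gives only ~7 km/s.
v31 = v_rel_kms(0.65, 2.5)
v21 = v_rel_kms(0.4, 3.3)
```

This reproduces the ∼3 km s^−1 gap between the two scenarios quoted above and shows why only the secular-resonance-assisted case clears the ∼10 km s^−1 melting threshold.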
Unlike the eccentricities, the inclinations (i) remain lower than 10° in most cases. This is because a strong secular resonance of Jupiter that excites the inclination (ν_15) hardly occurs in the gas disk within the Jovian orbit (Nagasawa et al. 2000, 2001, 2002). Although planetesimals with i = 10° move out of the gas disk, they stay for longer than 1/5 of their orbital period within one scale height of the disk, where the gas density is comparable to that at the disk midplane. The planetesimals beyond the a(1 + e) ∼ 4.5 AU line enter a region of resonance overlap and are scattered by close encounters with Jupiter. About half of the 300 km-sized planetesimals are scattered in both cases. This fraction would be lower for smaller planetesimals due to the stronger gas drag. One out of 15 scattered planetesimals returns to the region of a < 4.5 AU, but the majority eventually reach e ≳ 1 and leave orbit (Fig. 4).
DISCUSSION AND CONCLUSIONS
We studied the evolution of planetesimals in the gas disk, including the effect of the secular resonance caused by the gas-disk potential. We found that the planetesimals attain e ∼ 0.6 with the help of this secular resonance. The relative velocity of the planetesimals exceeds 12 km s^−1 around the 3:1 mean-motion resonance. The high-eccentricity region is restricted to a relatively narrow band of 1.5 AU-4 AU. In previous studies of planetesimal evolution in the gas disk, the maximum velocity was found to be ∼8 km s^−1 (e ∼ 0.6 at 2 AU), and such supersonic speeds were not frequent (Marzari & Weidenschilling 2002). In our simulations with secular resonance, however, about half of the planetesimals reached e ∼ 0.6.
Our results support the possibility of chondrule formation induced by planetesimal shock waves due to Jovian resonance. The minimum relative speed required for melting 1 mm-sized dust at 1 AU is considered to be ∼7 km s^−1 in the minimum-mass disk (e.g., Hood 1998; Iida et al. 2001; Desch & Connolly 2002). The typical e_max of ∼0.65 around 3 AU to 2 AU corresponds to a velocity of 11 km s^−1-14 km s^−1 relative to chondrule precursors rotating with the disk. The density of this region is 2 × 10^−10 g cm^−3 to 7 × 10^−11 g cm^−3. With this disk gas density, the formation of 0.1 mm chondrules by shock waves requires ∼10 km s^−1-18 km s^−1 (Iida et al. 2001), while ∼7 km s^−1 is sufficient in regions ten times denser. If the speed exceeds 20 km s^−1, even a 1 cm precursor evaporates, but such speeds are not realized. Note that the highly supersonic situation (≳12 km s^−1) is restricted to the 1 AU-3 AU region within the stable region of a ≲ 4.5 AU. This suggests that the chemical or taxonomic evolution of planetesimals may depend on the semi-major axis via the gas-disk density and the maximum heating caused by bow shocks.
Our results also relate to the origin of the crystalline silicates observed in a number of comets. In our simulations, about half of the planetesimals attain e ≳ 1 and move toward the outer region of the disk due to Jovian resonances. It has been shown that such eccentric icy planetesimals with core-mantle structures are changed into rocky planetesimals through the efficient evaporation of the icy mantle caused by shock heating, even in the region outside the snow line (Tanaka et al. 2013). As long as the dry planetesimals are transported into the cometary region, there is no conflict with the fact that cold interstellar ice is preserved in the comae. On the way to the outer region, the planetesimals are expected to accrete the silicate dust processed by the shock waves. This process can explain the presence of crystalline silicates in comets, if the scattered planetesimals containing crystalline silicates become mixed with icy outer planetesimals.
In this letter, we considered 300 km-sized planetesimals in the minimum-mass gas disk. If we had considered smaller planetesimals (≲100 km), the maximum velocities would have been smaller as a result of the stronger gas drag. On the other hand, for larger planetesimals (≳100 km) the maximum velocity does not differ greatly, although the ejection frequency would be enhanced owing to the weakened gas drag in such a case. The maximum speed of planetesimals depends on their mass and on the eccentricity and semi-major axis of Jupiter through the strength of the secular resonance. The effect of the secular resonance starts to come into play when Jupiter exceeds ∼1/3 of its current mass, and the effect continues until the | 2014-09-16T02:32:11.000Z | 2014-09-16T00:00:00.000 | {
"year": 2014,
"sha1": "3d596e458cefb579b14dfab3957f339d7034b6d6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1409.4486",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dcca06da51b1468668544516e996edb0151902ba",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
55760522 | pes2o/s2orc | v3-fos-license | Energy-Based Controller Design of Stochastic Magnetic Levitation System
This paper investigates the control problem of a magnetic levitation system in which the velocity feedback signal is influenced by stochastic disturbance. Firstly, the single-degree-of-freedom magnetic levitation system is regarded as an energy-transformation device. From the viewpoint of the energy-balance relation, the magnetic levitation system is transformed into a port-controlled Hamiltonian system model. Next, based on the Hamiltonian structure, the control law of the magnetic levitation system is designed by applying Lyapunov theory. Finally, a simulation verifies the correctness of the proposed results.
Introduction
A magnetic levitation system is a typical nonlinear system for which it is difficult to establish an accurate mathematical model, because the parameters of the electromagnetic part vary with time [1,2]. Normally, a standard magnetic levitation system consists of four parts: the sensors, the controller, the power amplifier, and the electromagnetic drives. Its control objective is to keep the levitated object in the desired position through a series of feedback and control actions. The general process is as follows: first, the given signal is compared with the feedback signal and the difference is passed along the regulating circuit to the power amplifier circuit; then the control currents flow through the power amplifier; finally, the electromagnet converts them into an electromagnetic force that controls the suspension position. Shu et al. [3] introduced the working principle of the magnetic levitation system. Combining the classical theories of dynamics and electromagnetism, the nonlinear equations of motion of the system are derived from the general form of the Lagrange equation and, after linearization around the operating point, a state-space description of the system is obtained.
In addition, in many practical control problems the systems are affected by stochastic disturbances, which can lead to instability [4]. Therefore, more and more scholars and experts have paid attention to the stability and control of stochastic nonlinear systems, and many research results have been obtained that greatly promoted the development of this field [5-9]. In [9], the nonlinear stochastic H∞ control of Itô-type differential systems with state-, control input-, and external disturbance-dependent noise is studied, and a sufficient condition is given for the finite/infinite horizon H∞ control of such a system by means of a Hamilton-Jacobi inequality. More recently, some progress has been made toward solving the stability analysis and controller design problems for stochastic Hamiltonian systems [10-12]. Sun and Peng [12] studied the robust adaptive control problem for a class of time-delay stochastic Hamiltonian systems. An uncertainty-independent adaptive control law is designed to guarantee that the closed-loop Hamiltonian system is robustly asymptotically stable in the mean square.
Due to the highly nonlinear characteristics of the system, the controller design problem will be solved based on Hamiltonian energy theory. In fact, the energy-based Hamiltonian method has been widely used in practical systems control [11, 13-19]. Based on Hamiltonian system theory, a port-controlled dissipative Hamiltonian model of the magnetic levitation system was established in [19]; using the Hamiltonian function as the storage function, a controller was designed that is simple and easy to implement. A key feature of these systems, which is useful for stability analysis and stabilization, is that the Hamiltonian function of a port-Hamiltonian system can be used as a Lyapunov function, which brings great convenience.
As is known, the controllers and regulators of practical systems are unavoidably affected by stochastic disturbances, and the study of controlled systems with stochastic disturbance is of practical significance. Different from previous studies, this paper deals with the controller design problem of the magnetic levitation system with stochastic disturbances. Efforts are made here to handle the control problem of the stochastic magnetic levitation system on the basis of Hamiltonian energy theory. We regard the magnetic suspension as an energy-conversion device and derive a mathematical model of the single-degree-of-freedom stochastic magnetic levitation system from the point of view of energy balance, which is then transformed into a port-controlled Hamiltonian system. Consequently, the controller of the stochastic magnetic levitation system is designed. Finally, a simulation example is given to verify the validity of the results.
The rest of this paper is organised as follows. Section 2 provides the problem formulation, the Hamiltonian modeling process of the stochastic magnetic levitation system, and some preliminaries. Section 3 gives the main results. A simulation example is worked out in Section 4 to illustrate the results. Section 5 draws the concluding remarks.
Notations. ‖ · ‖ stands for either the Euclidean vector norm or the induced matrix 2-norm. A function V(x) ∈ C^2 means that V(x) is a twice continuously differentiable function. The notation P ≥ Q (resp., P > Q), where P and Q are symmetric matrices, means that the matrix P − Q is positive semidefinite (resp., positive definite). λ_max(P) (λ_min(P)) denotes the maximum (minimum) eigenvalue of a real symmetric matrix P. Throughout the paper, the superscript "T" stands for matrix transposition. In addition, for the sake of simplicity, we denote ∂H(x)/∂x by ∇H(x).
Problem Description and Transformation
The physical model of the magnetic levitation train system, which consists of the concentrated mass of the train carriage (together with the supporting magnet) suspended from a rigid guide rail, is shown in Figure 1, where m is the mass of the train carriage (including the supporting magnet); g is the gravitational constant; δ is the gap between the supporting magnet and the guide rail; z_0 is the gap between the guide rail and the reference plane; z is the distance between the supporting magnet and the reference plane, z = z_0 + δ; L(z) is the self-inductance of the magnet coil, which depends on the gap; i is the current flowing through the magnet coil; R is the coil resistance; and u is the voltage across the magnet coil.
By invoking Kirchhoff's voltage law and Newton's second law, the dynamic equations of the magnetic levitation system can be obtained by taking the vertical upward direction as the positive direction (Eq. (1)), where Φ = L(z)i is the magnetic flux and F(i, z) is the force created by the electromagnet. Here we regard the flux Φ as the independent variable; then (1) can be further transformed into the forms of Eq. (3), where k = μ_0 N^2 A/4, μ_0 is the permeability of vacuum, N is the number of coil turns, and A is the effective pole area of the electromagnetic coil.
To obtain a port-controlled Hamiltonian model, we take a suitable approximation for the inductance, L(z) = 2k/(z − z_0). As is known, the speed of the rigid body is affected by stochastic disturbances during the operation of the magnetic levitation system. Let x = [Φ, z, ż]^T = [x_1, x_2, x_3]^T; owing to the influence of the stochastic disturbance, the magnetic levitation system (3) can be modeled as the stochastic differential equations (4), where w(t) is an independent standard Wiener process satisfying E{dw(t)} = 0 and E{dw^2(t)} = dt, and E is the expectation operator. The objective of this paper is to find a feedback control law (5) to ensure that the stochastic magnetic levitation system (4) with the controller (5) is asymptotically stable in the mean square.
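Since several of the display equations are not legible in this copy, the following Euler-Maruyama sketch of the state model rests on assumptions we label explicitly: the inductance approximation L(z) = 2k/(z − z_0), an upward electromagnet force Φ²/(4k), noise entering the velocity channel only, and illustrative parameter values.

```python
import numpy as np

# Sketch of the maglev state model (assumptions: L(z) = 2k/(z - z0),
# upward force Phi^2/(4k); all parameter values are illustrative only)
m, grav, R, k, z0 = 15.0, 9.81, 1.0, 1.0e-3, 0.0

def drift(x, u):
    phi, z, v = x
    i = phi * (z - z0) / (2.0 * k)            # current, from Phi = L(z) i
    return np.array([u - R * i,               # d(Phi)/dt (Kirchhoff)
                     v,                       # dz/dt
                     phi**2 / (4.0 * k * m) - grav])  # dv/dt (Newton)

def em_step(x, u, dt, sigma, rng):
    """One Euler-Maruyama step; noise perturbs the velocity channel only."""
    dw = rng.normal(0.0, np.sqrt(dt))
    return x + drift(x, u) * dt + np.array([0.0, 0.0, sigma]) * dw

# Force balance Phi*^2/(4k) = m g gives the equilibrium flux,
# consistent with x1* = sqrt(4kmg) in the text
phi_star = np.sqrt(4.0 * k * m * grav)
x_star = np.array([phi_star, 0.01, 0.0])      # z* = 0.01 m chosen freely
u_star = R * phi_star * (x_star[1] - z0) / (2.0 * k)
```

With the constant input u_star, the drift vanishes at x_star, which is the open-loop equilibrium the controller is meant to stabilize.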
Obviously, system (4) is a nonlinear system. In order to study the control problem of the stochastic magnetic levitation system from the viewpoint of energy balance, we first convert it into a stochastic Hamiltonian system. Taking the total of the electromagnetic energy and the mechanical energy as the Hamiltonian function H(x), the magnetic levitation port-controlled Hamiltonian system (7) is obtained. According to the equilibrium condition of the system, the speed of the rigid body reduces to zero when the system is stable; meanwhile, the electromagnetic force on the rigid body is equal to the gravity acting upon it. Then we get x₁* = √(4κmg) and x₃* = 0, so the equilibrium point of the system is x* = [√(4κmg), x₂*, 0]ᵀ. It is evident that J is a skew-symmetric matrix, that is, J = −Jᵀ, and R is a positive semidefinite matrix. Consequently, if u = 0 and the disturbance term vanishes, system (7) is a dissipative Hamiltonian system. In order to design the controller of system (7), we introduce the following definition.
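With m the carriage mass, g the gravitational constant, κ = μ₀N²S/4 the coil constant, and z₀ the rail-to-reference gap, one Hamiltonian consistent with the stated equilibrium (a sketch under these assumed symbol conventions, not necessarily the paper's exact expression) is:

```latex
H(x) \;=\; \frac{x_1^{2}\,(x_2 - z_0)}{4\kappa} \;+\; \frac{x_3^{2}}{2m} \;-\; m g\, x_2 ,
\qquad x = [\,\Phi,\; z,\; m\dot z\,]^{\mathsf T}.
```

Here ∂H/∂x₃ = x₃/m recovers the velocity, and the force balance ∂H/∂x₂ = 0 at equilibrium gives x₁*² = 4κmg, matching the equilibrium point x* stated above.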
Definition 1. If lim_{t→∞} E{‖x(t) − x*‖²} = 0, the stochastic Hamiltonian system (7) is said to be asymptotically stable in the mean square, where x(t) is the solution of system (7) at time t under the initial condition x(t₀) = x₀.
Next we introduce some auxiliary lemmas which will be used in this paper.
Controller Design of Stochastic Magnetic Levitation System
In this section, we put forward the controller design scheme for the stochastic magnetic levitation system (4). To this end, the stabilization problem of the stochastic Hamiltonian system (7) is discussed first.
Consider system (7). Choose a Lyapunov function V(x), and suppose that the Hamiltonian function H(x) ∈ C² satisfies (17) and (18), where α and β are positive scalars.
By Itô's formula, the expectation of the derivative of V along the trajectories of (7) can be bounded. If we choose suitable scalars α and β, stabilization may therefore be achieved by designing a suitable controller for system (7). The following theorem provides a feasible scheme.
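The Itô step used here is the standard second-order chain rule for diffusions; for a generic SDE dx = f(x)dt + g(x)dw and a C² function V (stated in generic notation, not the paper's specific symbols):

```latex
\mathrm{d}V(x) \;=\; \Big[(\nabla V(x))^{\mathsf T} f(x)
  \;+\; \tfrac{1}{2}\,\operatorname{tr}\!\big(g(x)^{\mathsf T}\,\nabla^{2}V(x)\,g(x)\big)\Big]\,\mathrm{d}t
  \;+\; (\nabla V(x))^{\mathsf T} g(x)\,\mathrm{d}w .
```

Taking expectations annihilates the martingale term (∇V)ᵀg dw, which is why a bound on the drift part alone controls E{V(x(t))}.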
Theorem 5. Consider system (7), and suppose the Hamiltonian function H(x) satisfies (17) and (18). Then the closed-loop stochastic Hamiltonian system of (7) is asymptotically stable in the mean square under the feedback control law (22), where α and β are scalars satisfying the stated conditions.

Proof. Substituting (22) into (7) yields the closed-loop system (23). Combining (18) and (20), set the constant c = −β/α and multiply e^{−ct} on both sides of inequality (27); we obtain (28), that is, (29). Integrating inequality (29) from 0 to t, we get (30), that is, (31). Since c < 0, this implies that lim_{t→∞} E{‖x − x*‖²} = 0 (32), so system (7) is asymptotically stable in the mean square under the feedback control law (22). This completes the proof.

Remark 6. Since H(x) ∈ C² and g₂(x) is a continuous function, according to Lemma 2 the solution of the closed-loop system (23) is unique for any initial condition in a neighborhood of the equilibrium point x*.
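The integration step in the proof is a Gronwall-type exponential estimate; in generic notation (a sketch, with c < 0 the constant obtained from the theorem's scalars):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,E\{V(x(t))\} \;\le\; c\,E\{V(x(t))\}
\;\Longrightarrow\;
E\{V(x(t))\} \;\le\; E\{V(x(0))\}\,e^{c t} \;\xrightarrow{\;t\to\infty\;}\; 0 ,
```

which, combined with the lower bound on V from (17), yields lim_{t→∞} E{‖x(t) − x*‖²} = 0.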
Next, we consider the stochastic magnetic levitation system (4). Obviously, there exist positive scalars α and β such that system (4) satisfies inequalities (17) and (18) of Theorem 5; thus we can obtain the following conclusion.
Proof. Since the stochastic magnetic levitation system (4) is equivalent to system (7), substituting g₁ into system (7) and formula (6) into (22) yields controller (33). The rest of the proof is omitted here.
Illustrative Examples
In this section, a simulation example is given to verify the correctness of the results obtained in this paper. The relevant parameters are given as follows: R = 4 Ω, m = 0.01 g, δ = 0.01 m, κ = 0.05, g = 0.0098 N/g, and z₀ = 0.1 cm. By calculation, we take α = 100 and β = 10.
According to Theorem 7, system (4) is asymptotically stable in the mean square under the feedback control law u = −5060 x₁(· − x₂). (34) The velocity curve of the rigid body is shown in Figure 2; it shows that the designed controller makes the system reach the equilibrium point quickly. Figure 3 is the displacement curve of the rigid body; the displacement also quickly reaches the equilibrium point.
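The mean-square convergence seen in Figures 2 and 3 can be illustrated numerically. The sketch below is not the paper's maglev model: it applies Euler–Maruyama Monte Carlo to a scalar linear test SDE dx = a·x dt + σ·x dw with 2a + σ² < 0 (all names and parameter values here are illustrative assumptions) and checks that the sample mean of x(T)² decays, which is exactly the property "asymptotically stable in the mean square".

```python
import math
import random


def simulate_mean_square(a=-2.0, sigma=0.5, x0=1.0, t_final=2.0, dt=1e-3,
                         paths=500, seed=0):
    """Monte Carlo estimate of E[x(T)^2] for the scalar SDE
    dx = a*x dt + sigma*x dw, integrated with Euler-Maruyama."""
    rng = random.Random(seed)
    steps = int(t_final / dt)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            dw = rng.gauss(0.0, sqrt_dt)      # Wiener increment, variance dt
            x += a * x * dt + sigma * x * dw  # Euler-Maruyama step
        total += x * x
    return total / paths                       # sample mean of x(T)^2


if __name__ == "__main__":
    # 2a + sigma^2 = -3.75 < 0, so E[x(T)^2] should be far below x0^2 = 1
    print(simulate_mean_square())
```

With the defaults, the exact value is E[x(2)²] = e^{(2a+σ²)·2} ≈ 5.5·10⁻⁴, so the printed estimate should be several orders of magnitude below the initial mean square of 1.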
Conclusion
This paper has investigated the control problem of the stochastic magnetic levitation system. By regarding magnetic levitation as an energy conversion device, we derived the mathematical model of a single-degree-of-freedom magnetic levitation system with stochastic disturbance from the viewpoint of energy balance, and then transformed the model into a port-controlled Hamiltonian system. The controller of the stochastic magnetic levitation system was then designed based on the obtained Hamiltonian system model. Finally, the correctness of the conclusion was verified by simulations. The main innovation of this paper is that we have fully taken into account the effect of random disturbances on the magnetic levitation system and solved the control problem within the Hamiltonian systems framework by making full use of the dissipative structural properties of Hamiltonian systems.
Figure 1: Physical model of the magnetic suspension system.
"year": 2017,
"sha1": "3c24085b4c38b81d455ab0357578575b95768cc1",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2017/7838431.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3c24085b4c38b81d455ab0357578575b95768cc1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Purpose: Research on Jamu (traditional herbal medicine and herbs) in the protection of Intellectual Property Rights (IPR) is important for several reasons. First, traditional knowledge in the form of herbal medicine and traditional medicine has economic value and needs to receive legal protection. Second, the local community's lack of understanding regarding intellectual property means they are not interested in taking economic benefit from traditional knowledge. Method: The theoretical framework used is legal culture theory, while the research method used is the socio-legal research method. Results and Conclusion: The research conclusions obtained are as follows. First, the protection of intellectual property for traditional knowledge of herbal medicine as traditional medicine shows a strong legal culture concept. Second, the process of resolving disputes between traditional herbal medicine business actors in Lamongan Regency shows a move away from court mechanisms. Implications of the research: Research on the legal culture of intellectual property rights protection for traditional medicine business performers has important implications in several aspects, including law, culture and business; one of them is better legal protection.
INTRODUCTION
Jamu and herbs are important symbols of national identity and are understood as authentic elements of Indonesian culture, as their appeal lies in their connection to Indonesian traditions. During Dutch colonialism in Indonesia, European doctors were deeply interested in herbal medicine, because they did not know how to treat the diseases of the patients they encountered in the Dutch East Indies. In the end, several studies were carried out on these native Indonesian medicinal plants. Among the researchers was a German doctor named Carl Waitz, who in 1829 published a book on herbal medicine titled Practical Observations on a Number of Javanese Medications. A Dutch physician and scientist named Adolphe Guillaume Vorderman also published books in the 1880s and 1890s, one of them titled Javanese Medicines, about the use of medicinal plants by local residents. In 1911 Vorderman released a manual on herbal medicine used in households throughout the Indies, titled Guidance and Advice Regarding the Use of Indies Plants, Fruits, etc. Another comprehensive book on native herbal medicine in the Indies was published by Rumphius, who worked in Ambon during the early 18th century, titled Herbaria Amboinesis (Ambonese Spice Book).
Jamu, as a herbal preparation that has been passed down from generation to generation, is widely planted in Indonesia, a blessing of natural conditions that are suitable for agricultural activities. Zamroni Salim and Ernawati Munadi note that, as a natural laboratory providing plant biodiversity of around 40,000 types, Indonesia provides a source of livelihood for the community in the form of food sources, industrial raw materials, pharmaceuticals and medicines. Based on their source, medicinal plants traded in Indonesia can be divided into cultivated medicinal plants and medicinal plants obtained directly (exploited) from the forest. Traditional knowledge of herbal medicine was originally a hereditary tradition, passed down from one generation to the next.
Once the writing tradition became known, the initially oral tradition of traditional medicine was finally written down. Research on herbs and herbal medicine as traditional medicine in the protection of Intellectual Property Rights is important because, first, traditional knowledge in the form of herbal medicine and traditional medicine has economic value and needs to receive legal protection, both from the government and through the attitude of the community itself. This will ensure that the culture passed down from generation to generation is preserved. Second, the local community's lack of understanding regarding intellectual property means that they are not interested in taking economic benefit from this traditional knowledge. This creates an opportunity for pharmaceutical industries from developed countries to produce traditional knowledge-based products without permission and without giving reasonable compensation to the people who own the traditional knowledge.
The problems in this research include: what is the legal culture of protecting traditional knowledge regarding traditional herbal medicine and herbs as medicine among herbal medicine industry players in Lamongan Regency, and how will Intellectual Property Rights disputes be resolved if there is a violation of brand rights for herbal medicine as traditional medicine?
THEORETICAL FRAMEWORK
The theoretical framework used in this research refers to legal culture theory. Law is essentially never separated from the concrete facts that occur in the field. Law will always be related to the development of society: law is not made but grows together with society's culture. According to Friedman, whether a law can run well is influenced by 3 (three) things: legal substance, legal structure, and legal culture. Legal culture is the set of people's behaviors towards the law, that is, how people think, act and behave with the laws they believe in. In relation to the protection of Intellectual Property Rights for home herbal medicine business actors, it will be seen how they understand the existence of intellectual property rights protection for the traditional knowledge that they have acquired through generations.
The community, as the owner of traditional knowledge regarding the processing of traditional medicines, is essentially also perceived as having a local legal culture. A statutory norm does not merely force a norm to be applied to the social environment. Society has its own norms, which do not have to be in accordance with existing statutory norms. A legal society must essentially be viewed in its pluralistic form, with various legal concepts that enrich the law of a country. Society lives by its philosophy and moral order, which fortify it to always be firm and moral; as Sally Falk Moore said, the moral order is formed through the evolution of law (morality, justice and conscience), which then becomes a guide for behavior and society. Bruno Latour said that the implementation of the law in one place is different from another (Johansen, 2021).

Fuad, F., Firmansyah, A., Sunaryo, E., & Machmud, A. (2024). LEGAL CULTURE OF INTELLECTUAL PROPERTY RIGHTS PROTECTION OF TRADITIONAL MEDICINE BUSINESS PERFORMERS
METHODOLOGY
The research method used in this research is the socio-legal research method. In this research, the researchers conducted in-depth interviews with household herbal medicine business actors located in Lamongan Regency, East Java. The in-depth interviews were aimed at 4 (four) household herbal medicine business owners who distribute their herbal medicine production to the community in Lamongan Regency. Lamongan Regency was chosen as the location because the locals still prefer herbal medicine and traditional medicine over medical drugs to treat any type of disease. The in-depth (depth) interviews were aimed purposively at these 4 (four) respondent subjects as household herbal medicine industry players. These respondents produce processed herbal medicine, sell it, and have long been involved in the household herbal and herbal medicine industry; therefore, they clearly understand the knowledge of processing traditional herbal medicine through generations in Lamongan Regency. The in-depth interviews aim to explore an in-depth understanding of the subjects' legal concepts regarding traditional medicine (ideological method). Field visits to the locations where traditional medicines are developed by the respondents (descriptive method) capture the actual facts, as well as the concept of dispute resolution (the applicable social norms) used in the world of traditional medicine.
This research uses an emic approach; in this case the researcher looks at the concept and understanding of law from the perspective of the subjects being studied. Emic is an approach to social groups that is based on field studies with in-depth interviews to understand the meaning of each social interaction, because law is a manifestation of prevailing social values. This was done to understand how respondents view the herbal medicine business they are running in relation to the protection of intellectual property rights for herbal and traditional herbal knowledge as traditional medicine in Indonesia. The research was conducted within 2 (two) months, from early June to the end of August 2020, in Lamongan Regency, by conducting in-depth interviews with selected respondents (Besson, 2023). This traditional knowledge is a unique set of traditional practices and ways of life that live and develop in society (Perangin-angin et al., 2020) (Hakim & Negara, 2018). Martha Tilaar and Bernard T. Widjaja said that the word jamu comes from the ancient Javanese word jampi, which implies the involvement of magic spells by a traditional shaman and the healing of illnesses using magic spells (Isnawati & Sumarno, 2021).
Jamu as a traditional herbal medicine is widely used as part of Indonesian people's local traditions. No evidence has been found regarding exactly when Indonesians started the tradition of using jamu or herbal medicine (Lim, 2019). One opinion holds that knowledge about medicine in Indonesia existed before the arrival of Indian influence (Lim, 2019): before Indonesians could read and write, there were already leaders accompanied by a priest for ceremonies and a shaman for magic and medicine. The use of traditional medicines by the ancestors of the Indonesian people over the centuries can be traced through old manuscripts written on palm leaves, such as Husodo (Java), Usada (Bali) and Lontarak Pabbara (South Sulawesi), and documents named Serat Primbon Jampi and Serat Racikan Boreh Wulang nDalem.
Hendri Wasito said that the history of the use of traditional Indonesian medicine can be traced far back. Heritage and culture in the palaces also enriched the treasures of natural medicine from Indonesia, for example the use of the noni plant as traditional Indonesian medicine.
It is recorded in wayang stories written during the reign of the kings and the Sunans. Proof of this can be seen in the noni plants growing in the medicinal plant collection museum at the royal palace and the Kasunanan mosque. All information regarding traditional medicine from ancient times until now is still well preserved in the Surakarta palace library, namely Serat Centhini and Serat Kawruh. Serat Centhini contains a total of 104 types of plants, mixed into 85 medicines to treat around 30 kinds of diseases, and is one of the main sources of Javanese literary texts containing writings about herbal medicine. Meanwhile, the Serat Kawruh Jampi-Jampi Jawi chapter contains 1,166 recipes, of which 922 are recipes for natural ingredients; the book even records 244 recipes for tattoos, amulets, charms and spells. The uses of these many recipes range from preventing and curing disease to beauty care for women (Ariyanti & Budi Asri, 2022).
Traditional medicinal knowledge is part of culture because it is mostly in oral form, passed down from one generation to the next, and is knowledge shared by all members of the community; it needs to be protected, including through legal protection for traditional knowledge in the form of herbal medicine and medicine (Ariyanti & Budi Asri, 2022).
Agus Sardjono said that protecting traditional rights serves to strengthen Indonesia's position in the world trading system, to protect the interests of local communities, and to return intellectual property to local communities, since knowledge of traditional Indonesian medicines has long been commercialized by other countries (Ramadan & Yanni Dewi Siregar, 2022).
According to Dutfield, there are several reasons why traditional knowledge needs to be protected by law (Dutfield, 2022; see Binga, 2019): it is useful for increasing the income of traditional communities, it can improve a country's national economy, it protects the environment and biodiversity, and it helps avoid the exploitation of natural resources. Heri Aryanto said that an important thing that must also be thought about is the use of traditional knowledge by other countries without providing benefits for Indonesia as the owner of the traditional knowledge (Kurnilasari et al., 2018). Ar (38 years old), one of the herbal medicine business people in Lamongan, explained (Wasitaatmadja, 2020b): "The herbal medicine brand that I own is not registered through IPR. I understand how important it is, but the registration process is too complicated. I only registered my herbal medicine business for a PIRT business permit. I just focus on selling the product and maintaining the quality of the taste, then my herbal medicine products will sell well on the market." When Ar was asked whether the registration of intellectual property rights in herbal medicine processing was important enough to pursue, Ar emphasized (Wasitaatmadja, 2020c): "Actually, in my opinion, it is also important to register my herbal medicine products, but it is a complicated matter; my product is not much, but I have to pay more to take care of the registration."
Based on Ar's explanation above, protection for traditional herbal knowledge through IPR is, according to him, quite important. On the other hand, the process of obtaining IPR protection, both in terms of brand protection and the recording of trade secret licenses for the flavors of the herbal medicine being processed, is perceived as burdensome. Riswandi said that traditional knowledge is primarily a natural practice, specifically in the areas of agriculture, fisheries, health, horticulture and forestry (Nainggolan et al., 2022). Traditional knowledge has become so popular in Southeast Asia that it has even entered the upper-class market (Sujatha, 2020).
Regarding innovations in the processing of traditional herbal medicine in Lamongan, Ar (38 years old), one of the traditional herbal medicine business actors, explained (Wasitaatmadja, 2020b): "I have been in the herbal medicine business since 2017, but the herbal recipe I have is a recipe passed down from my ancestors, and I modified it myself using my knowledge of herbal medicine." Kas explained the same thing (Wasitaatmadja, 2020d): "Even though I have known herbal medicine recipes for generations, I often modify herbal recipes. In fact, what often happens is that herbal medicine makers modify herbal recipes and then keep the modified recipes secret from other people." Cita Citrawinda Priapantja, in Taufik H. Simatupang, said that tradition itself is defined as any system of knowledge, innovation and cultural expression that has been passed on through generations in a specific community in its region and has developed as a reaction to environmental changes (Simatupang, 2021).
This knowledge is created, maintained, used and protected in traditional environments; its status and use are part of culture. According to the respondents' explanations, in the management and preservation of traditional medicine culture in Lamongan there is a pattern of knowledge passed down through generations, in addition to a process of learning to mix, so that traditional medicine processing can be modified by herbal medicine business actors.
A problem that also often arises in applying IPR protection to traditional healing knowledge is that traditional knowledge is created by communal communities, while science and technology are developed by individuals, research teams, or entrepreneurs employed by companies (Nainggolan et al., 2022).
Traditional knowledge is validated by the use of knowledge in communal society, while scientific knowledge is validated by peer evaluation, and technology by its use and success in the marketplace. There are no formal reward mechanisms in traditional systems, whereas the reputation conferred by first discoveries is the dominant means of reward in science, and the use of rents in technological systems (Correia et al., 2024). From an anthropological approach, it can be seen that there are various cultures that explain various processes of healing diseases, and a person can choose how to determine a healing model for himself (Prasetyo et al., 2020). The protection of traditional knowledge regarding traditional medicines, as a treasure of Indonesia's medicinal wealth, is very important, and it is necessary to think about creating a law that regulates this protection (Ayu & Wiryawan, 2019). Dwi Martini et al. said that the IPR regime is very individualistic, while ownership of traditional knowledge, especially traditional medicines, is shared (collective) ownership (Kurnianingrum, 2018).
SETTLEMENT OF INTELLECTUAL PROPERTY RIGHTS DISPUTES ON TRADITIONAL MEDICINE KNOWLEDGE IN LAMONGAN REGENCY
Resolving various disputes not only through the courts but also through other methods such as mediation actually shows a form of legal pluralism (Al-Ali & Tas, 2021).
Legal pluralism is a manifestation of the various complexities of tribes and ethnicities living together (Kings & Druce, 2020). According to Laura Nader and Todd, dispute resolution in various traditional cultures can be classified into several forms (Syarifuddin, 2019). First, lumping it (letting it go): allowing the matter to rest is the action taken when a person sees that a solution pursued through the state apparatus would result in losses greater than the gains obtained. Second, avoidance: the party who feels disadvantaged does not, from the start, want to deal with the party causing harm. Third, coercion (violence): violence is used to solve the problem and ends when one party is defeated by a stronger party. Fourth, negotiation: each party meets to find a solution to the problem without involving a third party. Fifth, mediation: the parties meet and involve a third party as mediator to resolve the problem. Sixth, arbitration: the parties involve an arbitrator and resolve the dispute through a win-win solution. Seventh, adjudication: the parties submit their dispute to the court, and the result is decided on a win-lose basis. In essence, this choice of dispute resolution shows a legal researcher that law must be seen from a very broad perspective: it is not just a matter of legal norms, but is also related to cultural and economic values (Syarifuddin, 2019).
Protection of Intellectual Property Rights for herbal medicine products that are not registered under brand rights, geographical indications, or patents is an interesting topic to discuss, because when a trademark dispute arises over the ownership of herbal medicine, or over trade secrets regarding the composition of its ingredients, the public cannot be protected by intellectual property rights law. In this situation, the community has its own ideas for protecting its traditional knowledge. Several disputes have occurred, especially over the trademarks used by each traditional herbal medicine business actor. Ar explains (Wasitaatmadja, 2020b): "If I ever experience this, I choose to talk nicely with the people who use my herbal medicine brand. However, I am sure that fortune has already been arranged; going further seems a bit naive." Ar stated that there had been parties who used his herbal medicine trademark without his permission, and he preferred to resolve this through a friendly dialogue process. Interestingly, he refused to use the courts as a dispute resolution process because this was related to the issue of sustenance, which he believes has been arranged by God. Another explanation was given by Fath (Wasitaatmadja, 2020c).
"Brand issues are not a problem among traditional medicinal plant businesses, each healer already knows whose brand this is and will not dare to use a brand that is not theirs.
Moreover, everyone who seeks treatment from me or other healers is generally only suitable for traditional healers.Our method of treatment is to give medicine according to the patient's body condition, so not all traditional medicines can be given to everyone.
Both have the same disease, but the treatment is definitely different because each patient's body condition is different.So selling products is not easy for everyone to use my products."Every doctor already knows that he can't use my medicine, he can't copy it, including copying my brand." Fath, as a certified traditional healer, explained that there are difficulties for other healers who are also certified to imitate other brands of medicine.Medicinal plants produced by traditional medicine makers have their own distinct characteristics.Fath further explained."Our fellow certified traditional medicine makers from the health service both have the same knowledge and learn from the same teacher.But in practice, every traditional medicine compounder has their own way of dealing with patients in practice, so each medicine compounder will have their own experience.This experience allows each herbalist to gain additional knowledge to improve their mixing abilities, and Every traditional medicine compounder has his or her own knowledge that differentiates one compounder from another.Each has its own character which is manifested in the brand symbol, and the secrets of compounding medicine from one compounder to another will be very different.Hence, stealing each other's secret medicine-mixing techniques is difficult, considering the different scientific characteristics of each compounder.Each compounder has his own characteristics of medicine which means that some of them will be reluctant to imitate the knowledge of other compounders.They prefer to improve their abilities through deepening their dispensing knowledge and experience in treating patients.The certified medicine makers are members of P-ASPETRI (Association of Members of All Indonesian Traditional Herbal Medicines).The reluctance to register a trademark is also caused by the costs it must incur.Fath explains (Wasitaatmadja, 2020c).
"The cost of registering our medicines with BPOM is already quite expensive, especially if you add in having to register our product brand.Meanwhile, the rates we charge to patients are cheap and not expensive."So I'm still thinking about costs, while waiting for a promise from the District Health Service which will provide convenience in registration costs and administration, and besides, there are rarely any conflicts regarding knowledge and brands, we both know." Kas explains the dispute resolution that will be carried out if someone commits a violation (Wasitaatmadja, 2020d)."Yes, of course we can leave that alone, all herbal medicine knowledge is actually the same and everyone must understand how to make herbal medicine.However, many herbal medicine makers often have additional secret recipes.It is also difficult to try to imitate the taste of herbal medicine made by someone, it is not very easy to imitate."Limited knowledge regarding the registration process, as well as the complexity of the IPR registration process for Lamongan traditional herbal medicine entrepreneurs, which are generally home-based herbal medicine businesses, makes them reluctant to register their trademarks, as well as record trade secret licenses in the form of secret recipe ingredients that are generally held for generations. 
This is also closely related to resolving the disputes that arise if other parties try to imitate. Dispute resolution through the courts would require the parties to provide proof through registered trademarks, and the complexity of the court process is not well understood by traditional herbal medicine businesses in Lamongan. According to Laura Nader, each cultural group presents several options for resolving disputes, whether through negotiation, violence or the judicial process. Of the four respondents from traditional herbal medicine businesses who were interviewed, none chose to resolve disputes by taking the matter to court.
Ar uses a negotiation process, namely inviting a good conversation with the party who has used his herbal medicine trademark. Fath prefers a letting-go approach (lumping it); he does this because every herbal medicine business actor has his own knowledge, and it is very difficult for them to copy each other. In the settlement process described by Fath, it appears that each social unit has its own norms and methods (self-regulation) for resolving every problem and dispute it faces. This can also be seen in the method chosen by Kas with regard to secret recipes and brands: each party is difficult to imitate because each has its own taste, a unique character of the herbal medicine that has been processed.
CONCLUSION
The conclusions resulting from the research on the legal culture of intellectual property rights of traditional herbal medicine businesses are as follows. First, the protection of intellectual property for traditional knowledge of herbal medicine as traditional medicine shows a strong legal culture concept. Second, the dispute resolution process that occurs between traditional herbal medicine business actors in Lamongan Regency shows a move away from dispute resolution mechanisms through the courts. They prefer negotiation or letting go (lumping it) because of several obstacles they face, from the religious belief that everything is the will of God, who regulates every human being's sustenance, to the process of differentiation, which makes imitation difficult and thus becomes a way of avoiding disputes between traditional herbal medicine businesses in Lamongan Regency.
Fuad, F., Firmansyah, A., Sunaryo, E., & Machmud, A. (2024). LEGAL CULTURE OF INTELLECTUAL PROPERTY RIGHTS PROTECTION OF TRADITIONAL MEDICINE BUSINESS PERFORMERS. This traditional knowledge obtained from generation to generation also occurs within the scope of traditional herbal medicine businesses in Lamongan Regency. One of the home industry herbal medicine entrepreneurs, Kas (51 years), explained (Wasitaatmadja, 2020d): "I have mastered the herbal medicine that I make for a long time, and it has also been common knowledge here for generations. Everyone knows, in my opinion, how to mix herbal medicine, empon-empon, traditional medicine. But indeed, everyone has their own way of adding their own ingredients, and other people don't know what the additional ingredients are." Based on Kas's explanation, it can be concluded that knowledge of processing herbal medicine as traditional medicine is generally possessed by the Lamongan people. This knowledge is general in nature and passed down through generations. At the same time, it has a certain level of secrecy, developed differently by each herbal medicine business actor or traditional healer. Sup (50 years old), who runs one of the herbal medicine businesses, explained (Wasitaatmadja, 2020e): "Apart from traditional knowledge which has been passed down from generation to generation, this knowledge can now be gained through a kind of course or education. Now many people can understand how to make traditional medicines in the form of herbs. This kind of course already exists in Lamongan for mixing traditional medicine." Sup explained that the skill of making herbal medicines as traditional medicine can be obtained by attending a course held in Lamongan Regency. The preservation of traditional knowledge in concocting herbal medicine as traditional
medicine is not only passed down within the family as a secret recipe but has become more institutionalized in the form of education and courses on dispensing traditional medicine. The term traditional knowledge, as used by Sup and Kas above, can also be understood in the WIPO perspective as carrying a broader meaning that includes indigenous knowledge and folklore ("indigenous knowledge would be the traditional knowledge of indigenous peoples, therefore is part of traditional category, but traditional knowledge is not necessarily indigenous. That is to say, indigenous knowledge is traditional knowledge, but not all traditional knowledge is indigenous"). According to WIPO, traditional knowledge is distinguished from other knowledge by its connection to local communities (indigenous people). Examples are all tradition-based creations related to knowledge systems, innovations and expressions of culture or science, inventions, scientific discoveries, designs, brands, names and symbols, confidential information, and all other tradition-based innovations resulting from intellectual activities in the fields of industry, science, and art. In its implementation, the protection of intellectual property rights for herbal medicine as traditional medicine depends greatly on the public's perception of the legal protection provided by the state. The confidentiality of these particular techniques for dispensing herbal or traditional medicine has never been protected by intellectual property law. However, herbal medicine and traditional medicine businesses in Lamongan do protect their brands and the confidentiality of traditional ingredients with certain additions. These businesses tend to keep such records confidential within the family and have never asked for protection of their rights or registration of licenses from the Ministry of Law and Human
Rights. Kas explains (Wasitaatmadja, 2020d): "This is about the confidentiality of herbal medicine concoction techniques. You don't need to register a permit with the government. For example, there are people who make herbal medicine, while many people understand how to make herbal medicine. How come this person's herbal medicine has a taste different from others'? Of course, this person has his own secret that no one else should know. Well, I once asked what the secret was, but I wasn't told. It is kept secret and not told to anyone else." "The herbal medicine is made and sold by ourselves, without opening branches and without making any agreements like that." The registration process is considered too complicated, which makes traditional herbal medicine traders in Lamongan Regency reluctant to pursue it. Ignorance of the process for registering trademark rights and trade secret protection, in addition to the extra costs that must be incurred, has led traditional herbal medicine practitioners to avoid the IPR registration process. Fath, a certified health herbalist in Lamongan Regency who has practiced traditional medicine for 20 (twenty) years, further explained (Wasitaatmadja, 2020c): "The knowledge I have gained has been passed down through generations, but I have added to my knowledge about medicinal plants through official education, and I have also received a certificate as a traditional medicine preparer from the East Java Provincial Health Service. In Lamongan, there are only two traditional healers certified by the health service. I mix the medicines that I produce myself based on the knowledge I have gained; I have given most of them brands, and some only numbers. Each number indicates a healing use. I plant the medicinal plants myself behind the house and then mix them, but regarding the problem of obtaining PIRT and brand registration, the health
service had previously promised to help, but until now it has not been realized. I am currently planning for this village to become a center for medicinal plants, so that the community can get economic value from the medicinal plants produced." Based on the respondents' explanations above, these household traditional herbal medicine businesses have in general never registered their traditional knowledge protection rights with the Directorate General of Intellectual Property Rights. For them, it is enough to keep secrets within their own family for generations and not to open branches through brand licensing and trade secret agreements. Their understanding also influences the legal protection of traditional knowledge of herbal medicine processing in Lamongan. Traditional knowledge combines innovation with the body of knowledge acquired and spread by communities through generations, supported by ecology, environment, lifestyle, community behavior, and culture. It can be concluded that traditional knowledge is the result of intellectual work that grows and develops from and within a communal society or certain community (M. Syamsudin and Budi Agus). Each herbalist's experience is different. "So stealing knowledge from each other would be very difficult; each traditional medicine maker has his own character and knowledge when he enters the field." Based on Fath's statement, violating brand rights or trade secrets by stealing one another's knowledge of concocting traditional medicine is quite difficult.
Nor is it easy to steal secret recipes or trademarks, both of which are unregistered. Dispute resolution through adjudication or court institutions is something that traditional herbal medicine businesses in Lamongan Regency avoid.
1.
First, in protecting intellectual property over traditional knowledge of herbal medicine as traditional medicine, the business actors show a strong legal culture with its own dispute resolution mechanism. The existence of statutory regulation of trademark rights in Law No. 20 of 2016 concerning Marks and Geographical Indications, and of trade secrets in Law No. 30 of 2000 concerning Trade Secrets, within the intellectual property protection regime has not led them to register a mark or to register a license for a trade-secret recipe. They have their own model of legal protection in the form of distinctive flavors that cannot be imitated by other business actors, which further protects their herbal medicine businesses. Apart from that, not opening branches through licensing agreements with third parties is an effort to protect the herbal medicine secrets they have passed down through generations; | 2024-05-11T16:21:41.798Z | 2024-05-06T00:00:00.000 | {
"year": 2024,
"sha1": "fa6b71260c1965530b137f8d23ef48a35d4033d4",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.journalsdg.org/jlss/article/download/3647/1770",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b2303e286333ee7cf45a87ec834bc8be04f17645",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
257434929 | pes2o/s2orc | v3-fos-license | Melatonin Treatment in Kidney Diseases
Melatonin is a neurohormone that is mainly secreted by the pineal gland. It coordinates the work of the superior biological clock and consequently affects many processes in the human body. Disorders of the waking and sleeping period result in nervous system imbalance and generate metabolic and endocrine derangements. The purpose of this review is to provide information regarding the potential benefits of melatonin use, particularly in kidney diseases. The impact on the cardiovascular system, diabetes, and homeostasis causes melatonin to be indirectly connected to kidney function and quality of life in people with chronic kidney disease. Moreover, there are numerous reports showing that melatonin plays a role as an antioxidant, free radical scavenger, and cytoprotective agent. This means that the supplementation of melatonin can be helpful in almost every type of kidney injury because inflammation, apoptosis, and oxidative stress occur, regardless of the mechanism. The administration of melatonin has a renoprotective effect and inhibits the progression of complications connected to renal failure. It is very important that exogenous melatonin supplementation is well tolerated and that the number of side effects caused by this type of treatment is low.
Introduction
Melatonin, discovered in the late 1950s, is a pleiotropic neurohormone that is mainly produced in the pineal gland. It is also released from other tissues, where it acts as a local regulatory molecule [1]. The elementary role of melatonin is to transmit information concerning the daily cycle of light and darkness to the different parts of the human body, which ultimately affects the functioning of the entire organism [2]. However, there are many reports showing that this is not the only mechanism and function of this particle. It has been proven that melatonin also takes part in antioxidative, anti-inflammatory, antiapoptotic, and immune processes [1,[3][4][5][6][7][8][9][10][11][12][13]. Moreover, melatonin participates in the detoxification of free radicals, bone formation, reproduction, and body mass regulation and has an influence on cardiovascular homeostasis [14][15][16][17]. The renoprotective effect of melatonin has been the subject of reports in the last decade, which have found that melatonin not only ameliorates sleep disorders in patients with chronic kidney disease (CKD) but also has a beneficial effect on blood pressure and provides protection against oxidative stress and inflammation [18][19][20], which occur in a wide variety of kidney injuries such as CKD, glomerulonephritis, contrast-induced kidney injury, drug-induced nephrotoxicity, and acute ischemia-reperfusion injury. This review summarizes the physiology of action and the final effects of melatonin treatment in different types of kidney injuries.
The Biosynthesis and Metabolism of Melatonin
Melatonin is a neurohormone whose main source is the pineal gland [21]. The production of melatonin is dependent on the light/dark cycle. Interestingly, light can either suppress or initiate melatonin synthesis [22,23]. When light is received by the retina, the signal is relayed to the suprachiasmatic nucleus, which controls pineal melatonin production. The production of melatonin takes place mostly during the night [22,23]. Both the synthesis and the length of secretion are directly dependent on the duration of the sleep period. Melatonin is a time-based transmitter that conveys information about the round-the-clock cycle of light and darkness to the body [25,26]. However, it is important to emphasize that the pineal gland is not the only place where melatonin is synthesized. It is also produced by retinal photoreceptors [27], the gastrointestinal tract [28,29], bone marrow [30], the liver, the kidneys, the thyroid, the pancreas, the thymus, the spleen, the carotid body, the reproductive tract, and endothelial cells. Human skin is also a site where all enzymes involved in the production process are expressed [10]. There are two G-protein-coupled melatonin receptors: MT1 and MT2 [31]. After their activation, the intracellular level of the second messenger cyclic adenosine monophosphate (cAMP) is decreased. The result is a modification of signaling pathways downstream of protein kinases A and C and of the cAMP response element-binding protein (CREB) [10,32]. Melatonin receptors are widespread.
Most of them are distributed in the central nervous system, but they are also located in peripheral body parts such as the retina, cerebral and peripheral arteries, the kidneys, the pancreas, the adrenal cortex, the testes, and immune cells [33,34].
The Nervous System
It is well known that changes in melatonin concentration are involved in sleep-wake cycle disorders, mood disturbances, impairments of cognitive skills, learning and memory problems, protection of the nervous system, drug abuse, and cancer processes. Therapies based on pharmacological melatonin agonists (agomelatine, ramelteon, and tasimelteon), which also act on MT1/MT2 receptors, have been the subject of research interest in recent years [35]. Melatonin may also point toward novel antidepressants, which affect the concentrations of neurotrophins or neurotransmitters; in addition, they reduce proinflammatory cytokine levels in the serum [36]. The neuroprotective effect of melatonin is used in the treatment of Alzheimer's, Parkinson's, and Huntington's diseases as well as amyotrophic lateral sclerosis, stroke, and brain trauma [7,37]. Due to its antioxidant properties, melatonin acts as a scavenger of free radicals and regulates numerous reactions at the molecular level, including oxidative stress, inflammation, and apoptosis [38,39]. It has also been documented that melatonin is an inhibitor of calpain, whose activity is significant in the pathogenesis of many central nervous system disorders [40].
The Immune System
Another important role of melatonin is its ability to immunomodulate and strengthen immune surveillance [41]. It stimulates the production of different lines of cells involved both in humoral and cell-mediated immunity, such as macrophages, natural killer cells, and CD4+ cells, and affects the synthesis of a wide variety of cytokines [42]. The direct antiviral and anti-bacterial effects of melatonin have been documented [43][44][45]. During severe infections, the administration of melatonin has been found to have immunomodulatory, antioxidative, and cytoprotective functions [44,46,47]. It has been proven that due to its beneficial pleiotropic effects, the administration of melatonin reduces mortality in both viral and bacterial inflammation [48]. Considering the evidence supporting the role of this hormone in directing oxidative stress and inflammatory processes, as well as the management of immune reactions, examinations involving patients with viral infections caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) have also been conducted. They showed that the administration of melatonin as an adjuvant therapy might be beneficial and that it should be considered during Coronavirus Disease 2019 (COVID-19) [6,49,50].
The Gastrointestinal Tract
Melatonin in the digestive system, besides its antioxidant effect and ability to stimulate the immune system, reduces the secretion of hydrochloric acid, enhances the regeneration of the epithelium, and increases microcirculation. All of these functions make melatonin one of the therapeutic options for preventing different diseases of the gastrointestinal tract, e.g., colorectal cancer, ulcerative colitis, gastric ulcers, and irritable bowel syndrome. It may also be helpful during treatment [51,52]. It has been documented that melatonin supplementation results in the complete remission of gastroesophageal reflux disease. It has a protective role against acute and chronic irritants that affect the esophagus and stomach. It is also effective in healing ulcers [53]. Moreover, some studies have confirmed that melatonin has strong supporting effects on hepatocytes in the prevention of non-alcoholic steatohepatitis (NASH) [54].
The Respiratory System
A positive therapeutic potential of melatonin has also been discovered in patients suffering from pulmonary disorders. Melatonin prevents inflammation, eliminates several oxygen-derived reactants, detoxifies nitric oxide, and takes part in apoptosis, including in cancer cells; it also inhibits the proliferation of these malignant cells [55][56][57]. Through a fairly similar mechanism, melatonin restricts pulmonary fibrosis. It reduces endothelial cell proliferation, invasion, and migration [55,58]. Moreover, it can minimize the gathering of inflammatory cells and reduce the expression of inflammatory mediators, such as cyclooxygenase-2. The amount of proinflammatory cytokines also decreases, which consequently leads to the inhibition of cellular proliferation [59]. The role of melatonin during respiratory tract infections is also important and is discussed in the section regarding the immune system.
Endocrinology and Gynecology
Clinical trials with melatonin administration have been conducted in animal and human groups. According to contemporary knowledge, melatonin can improve fertility, oocyte quality, maturation, and the number of embryos [14,60]. Moreover, a positive effect during pregnancy has been suggested. The protection of neurogenesis, a supportive impact on the placenta, and a reduction in oxidative stress are mechanisms that increase the reproductive rate and improve embryo-fetal development [60,61]. Reactive oxygen species cause disturbances during pregnancy and are also responsible for complications in the perinatal period. Melatonin is a scavenger of free radicals and has antioxidative and cytoprotective abilities; it is therefore possible that it is crucial for a successful pregnancy. Not only is the role of melatonin in human reproduction important; its support is also necessary when neonatal pathologies occur [62,63]. Melatonin is a supervisor during the processes of deoxyribonucleic acid (DNA) methylation and histone alteration. In this way, radical changes in gene expression are avoided, and the fetus is protected from the occurrence of pathologies. An insufficient concentration of melatonin during pregnancy can leave endocrine disorders in the genetic code during early ontogenesis, which subsequently develop in childhood [64].
Other Functions
Some additional, very important functions of melatonin in the human body are described in the section about kidney diseases below. The roles of this hormone in different parts of human body are summarized in Figure 2.
The Role of Melatonin in Chronic Kidney Disease
Chronic kidney disease (CKD) is currently one of the leading public health problems worldwide. It affects almost 13% of the world population [65]. CKD is associated with many complications, such as malnutrition; anemia; hyperlipidemia; overhydration; and endocrine, mineral bone, and metabolic disorders [65][66][67]. Patients with CKD, especially those who are treated with renal replacement therapy, frequently suffer from sleep disturbances [68]. It has been reported that 80% of end-stage renal disease patients complain about sleep disorders [69][70][71][72]. The complications of CKD intensify insomnia as well as depression, anxiety, and itch, which often affect patients with decreased kidney function [73][74][75]. A chronic deficit of sleep can lead to metabolic and endocrine disorders, e.g., diabetes, obesity, or hypertension [76][77][78][79][80][81]. Behavioral interventions and pharmacological treatments are often not sufficient [82]. There is convincing evidence that melatonin efficiently accelerates falling asleep; regulates the duration of wake times; and improves concentration, reflexes, and cognitive functions [83][84][85][86][87][88]. It has been documented that hemodialysis patients suffer from disturbances in the diurnal rhythmicity of the sleep-wake cycle and melatonin concentrations [89,90]. Melatonin synchronizes the circadian rhythms, improves the quality of sleep, and is involved in neuronal survival [91]. It has been found that it prevents further implications of sleeplessness such as neurodegenerative diseases [83,92]. Melatonin administration is recommended for different types of sleep disorders, as it synchronizes the circadian rhythms, depending on the time of day when the drug is taken [93]. Edalat-Nejad et al. carried out a 6-week randomized, double-blind, cross-over clinical trial in hemodialysis patients. Melatonin was administered to patients at bedtime. As a result, the quality of sleep improved [94]. 
It should not be overlooked that melatonin supplementation is well tolerated, with a small number of side effects [87,88,91]. In the available clinical trials, exogenous melatonin is an effective drug with a low risk of dangerous side effects in patients with CKD [68,95,96]. Ramelteon, a melatonin-receptor agonist, is also approved for the treatment of insomnia. It has been reported that it is safe and effective [97].
The dysregulation of the circadian rhythm is connected with a higher risk of cardiovascular events [98][99][100][101][102]. The strong association between cardiac reactions and time of day is the reason why myocardial infarction (MI), sudden cardiac death, and ischemic stroke are more likely to occur in the early morning [103][104][105]. Moreover, there have also been examinations assessing healing after MI depending on the circadian rhythm [106,107]. It was proven that any disruptions result in alterations in immune responses, which are crucial for scar formation and the functioning of the heart in the future. In an experiment with mice, fewer blood vessels formed in the infarcted area during the proliferative phase, 1 week after MI, compared with the control group. Echocardiography after 14 days showed increased left ventricular dilation and infarct expansion [106]. This proves that the stabilization of the biological clock is needed to maintain homeostasis in the whole organism and support recovery [108]. It has been found that melatonin provides protection by activating silent information regulator 1 (SIRT1), which acts in a receptor-dependent manner. This causes a reduction in apoptotic protein expression and an increase in antiapoptotic protein expression [109]. Moreover, melatonin protects cells from reactive oxygen species [110,111]. This results in an improvement in cardiac function, a reduction in oxidative damage, and a decline in myocardial apoptosis [112,113]. Oxidative stress causes cellular injuries in the vascular system and induces inflammatory processes [114,115]. It is directly involved in the pathogenesis of cardiovascular diseases [98]. Among patients in the early stages of chronic kidney disease, the incidence of cardiovascular events is significantly higher. Moreover, the prevalence rate increases commensurately with the advancement of kidney function deterioration [116]. 
Unfortunately, cardiovascular diseases are a frequent reason for the increased morbidity and mortality in this group of patients [43,[117][118][119][120]. New therapies are sought to prevent cardiovascular complications in CKD patients. Antioxidant therapies and clinical trials are being conducted [121]. Melatonin has been documented to participate in controlling oxidative stress and has been shown to have a positive influence on the cardiovascular system [98,122,123].
Atherosclerosis and hypertension are also frequent complications of CKD [120,124]. On the other hand, hypertension can lead to a deterioration of kidney function and is the second leading cause of end-stage renal disease [125]. There are several mechanisms of hypertensive renal damage, such as the renin-angiotensin-aldosterone system (RAAS), oxidative stress, endothelial dysfunction, and genetic determinants. Inflammation and fibrosis lead to glomerular sclerosis, tubular atrophy, and interstitial fibrosis [126]. In addition, advanced kidney disease can cause difficulties in blood pressure normalization [127]. It has been shown that melatonin may play a role in reducing blood pressure during the day and night by influencing changes in the endothelium and in the functioning of the autonomic nervous system and the renin-angiotensin system. Moreover, it reduces oxidative stress [68,128]. A clinical trial aimed at observing the regulation of blood pressure by melatonin indicated that after pinealectomy, and in other cases in which the plasma melatonin concentration is decreased, patients should receive melatonin supplementation in order to maintain correct blood pressure values [129]. Melatonin also takes part in delaying atherosclerosis [130][131][132]. The degree of changes in blood vessels is higher in proportion to the progression of renal disease. The pathogenesis is connected to systemic inflammation and the increased amount of reactive oxygen species [120,133]. Increased carotid artery intima-media thickness, carotid arterial wall stiffness, and coronary artery calcification are also common in children with chronic kidney disease [134]. The study by Zhang showed that melatonin reduces atherosclerotic plaque in the aorta [135]. This is very important because the stage of atherosclerosis is a strong predictor of the mortality rate due to cardiovascular disease in patients with CKD [133,136]. 
Anti-inflammatory treatment strategies could be beneficial [137].
Obesity and diabetes are states that often coexist with CKD and deepen kidney injury [138][139][140]. Diabetic nephropathy occurs in one third of diabetic patients [141]. Adipocytes trigger the release of proinflammatory cytokines [138,142]. Serum concentrations of C-reactive protein, adiponectin, resistin, interleukin-6, tumor necrosis factor-alpha, monocyte chemoattractant protein-1, and CD68 are responsible for chronic inflammation [143]. While the levels of these molecules are high in the blood circulation, they bind with receptors located in the cell membranes of renal tissues. As a result of this binding, the kidneys are damaged [144]. Melatonin, with its antioxidant and anti-inflammatory abilities, influences this process through several mechanisms, such as ameliorating NF-κB and NLRP3-inflammasome signaling, reducing proinflammatory protein expression in the serum, regulating metabolic conversion and the energy balance, activating receptors in adipocytes, and sensitizing adipocytes to insulin and leptin [145,146]. Obesity is associated with hemodynamic, structural, and histological renal changes [138]. The hypertrophy of glomeruli and tubules is observed. Focal segmental glomerulosclerosis, or bulbous sclerosis, is also possible. The gradual progression of nephropathy is connected with renal hemodynamic changes, insulin resistance, and lipid metabolism disorders [147]. These effects are crucial for the progression of kidney diseases and are also associated with the advancement of cardiovascular complications. Melatonin has a hypolipidemic impact by enhancing endogenous cholesterol clearance mechanisms [148,149]. It takes part in bile acid production and suppresses low-density lipoprotein receptor activity. It also causes an increase in the level of irisin and accelerates cholesterol excretion in the feces [150]. In an experiment with mice, lipid accumulation varied considerably during the administration of melatonin. 
It was reduced, and the expression of lipid metabolic genes was minimized [151]. Melatonin not only influences the causes of hypercholesteremia but also protects tissues from the toxic effects of oxidized lipoproteins [152]. It is assumed that this is an effect of melatonin's impact on cell membranes [153,154]. Melatonin treatment has also been proven to be beneficial in diabetic patients [155][156][157]. A lack of this hormone causes a reduction in glucose transporter type 4 (GLUT4) gene expression, which results in the development of glucose intolerance and insulin resistance [156]. Moreover, melatonin enhances insulin secretion and β-cell survival. Islet sensitivity to cAMP is higher during melatonin supplementation, and this results in an increase in insulin secretion [158]. Simultaneously, melatonin stimulates glucagon release [155]. It has been documented that certain variants occur in the coding sequence of the melatonin receptor gene MTNR1B [155,157]. These consequently cause an inhibition of melatonin binding and information transmission, which is ultimately associated with a higher risk of type 2 diabetes mellitus [155][156][157][159][160][161]. A reduced level of melatonin is also implicated in the pathogenesis of type 2 diabetes [160,161]. A study by Mok that included numerous clinical trials with the use of melatonin supplementation in patients with diabetes showed that melatonin administration may be a new therapeutic method to improve the diabetic condition and reduce the prevalence of diabetic complications [159,161].
The Role of Melatonin in Glomerulonephritis
Glomerulonephritis is a collection of different types of kidney damage at the level of the glomerulus. This group has a variety of causative factors. However, most of them are the consequences of immune processes [162].
The etiopathogenesis of lupus nephritis is not well understood. It is connected to the activation of the NLRP3 inflammasome, a member of the NLR family (NOD-like receptors) that is involved in the synthesis of proinflammatory cytokines [163,164]. Histological examination reveals severe renal damage. Widening of the tubules and capillaries and expansion of the mesangial matrix can be observed, manifested by hypertrophy of the mesangium, glomerular atrophy, and thickening of the capillary walls and basement membrane. It has been proven that the described alterations are reduced during melatonin treatment [165]. Melatonin, with its important antifibrotic, antioxidative, anti-inflammatory, and pro- and anti-apoptotic effects, inhibits lupus-related nephropathy and gives a chance to avoid the complications of autoimmune diseases [166,167].
Focal segmental glomerulosclerosis (FSGS), another type of glomerulonephritis, is characterized by focal and segmental scarring of the glomeruli with tubular involution and interstitial fibrosis [168]. One of the discovered candidate genes, which may help researchers understand the exact mechanism of FSGS and find guidelines for the diagnosis and therapy of the disease, is Melatonin Receptor 1A (MTNR1A) [169].
The efficacy of melatonin therapy was also assessed in a group of patients with membranous nephropathy. Several beneficial effects were observed during melatonin treatment: a significant reduction in proteinuria, an improvement of glomerular damage, decreased deposition of immune complexes, and a decrease in the subpopulation of CD19+ B cells and in proinflammatory cytokines, with a concomitant increase in the expression of anti-inflammatory cytokines. Moreover, the secretion of reactive oxygen species was minimized. All these findings show that melatonin treatment prevents the development of membranous nephropathy through many different pathways [170].
The Role of Melatonin in Contrast-Induced Kidney Injury
Contrast-induced acute kidney injury (CI-AKI) is a condition in which a progressive deterioration of kidney function is observed a few days after contrast administration. Precisely, it is defined as an increase in serum creatinine of ≥0.3 mg/dL, or to ≥1.5-1.9 times baseline, in the 48-72 h following contrast medium administration [171]. This is consistent with the definition of AKI in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines. This situation is a result of the direct impact of contrast medium on the kidneys. Nephrotoxicity is manifested by damage to tubular epithelial cells. Moreover, vasoactive molecules are released; they stimulate oxidative stress, leading to ischemic injury [172]. The direct cytotoxic effect and the hemodynamic alterations are two key mechanisms in the pathophysiology of CI-AKI [173]. Endothelial cell apoptosis and inflammation also occur during CI-AKI [174]. All these changes lead to eGFR reduction [175]. Multiple pharmacologic strategies, usually in combination with maintaining proper hydration, are still used to prevent CI-AKI [176]. In many trials, various techniques to protect renal cells from free-radical injury were attempted, but the roles of most of them, including antioxidant agents, remain unclear [177,178]. The most extensively studied agents are N-acetylcysteine, which removes reactive forms of oxygen from the organism, and nitric oxide, which dilates the vessels [178]. The role of free radicals is crucial in renal vasoconstriction. Numerous data have shown the renoprotective effect of melatonin, as assessed by the normalization of creatinine and urea in the serum, positive alterations in the histological examination of renal tissues, and decreases in the levels of early indicators of kidney injury, such as neutrophil gelatinase-associated lipocalin (NGAL) [179]. The use of melatonin as a premedication in examinations with contrast caused a relevant enhancement of the expression of Sirt3 and decreased the ac-SOD2 K68 level.
As a result, oxidative stress was significantly decreased [180]. Taking into account the anti-inflammatory function of melatonin, it may play a role as one of the effective preventive strategies for CI-AKI [180].
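The serum-creatinine criterion quoted above is a simple threshold rule and can be sketched in code. The function below is illustrative only: the name is our own, the 1.5× relative-rise cut-off is the lower bound of the quoted 1.5-1.9× range, and the 48-72 h timing window plus the clinical context must be verified separately.

```python
def meets_ci_aki_criterion(baseline_scr_mg_dl, current_scr_mg_dl):
    """Screen serum creatinine against the CI-AKI definition quoted above.

    Positive when creatinine rises by >= 0.3 mg/dL in absolute terms,
    or to >= 1.5 times the baseline value. This only evaluates the two
    creatinine values; the post-contrast timing window is not checked.
    """
    absolute_rise = current_scr_mg_dl - baseline_scr_mg_dl
    relative_ratio = current_scr_mg_dl / baseline_scr_mg_dl
    return absolute_rise >= 0.3 or relative_ratio >= 1.5

# A rise from 1.0 to 1.4 mg/dL meets the absolute criterion.
assert meets_ci_aki_criterion(1.0, 1.4)
# A rise from 1.0 to 1.2 mg/dL meets neither threshold.
assert not meets_ci_aki_criterion(1.0, 1.2)
```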
The Role of Melatonin in Treatment-Induced Nephrotoxicity
Acute kidney injury is a common complication of drug administration. Drug nephrotoxicity is divided into two types, depending on the pathomechanism. The first is mediated by inflammation and is commonly referred to as acute interstitial nephritis; it is usually caused by an allergic reaction. The second type is known as toxic acute tubular necrosis. It occurs when the pharmacologic agents or their metabolites act as direct tubular toxins [181]. Numerous factors influence the kidney's response to pharmacological treatment. Some depend on the patient (gender, age, and genes), some on the drug (dose, solubility, and direct nephrotoxic effect), and some on the kidneys themselves (blood flow and proximal tubular uptake of the drug) [182]. The most common risk factors among patients suffering from drug-induced nephropathy include advanced age, dehydration, pre-existing renal dysfunction, and the simultaneous use of other nephrotoxins [183]. Unfortunately, incomplete renal recovery is observed in one third of patients with drug-induced nephrotoxicity. The duration of injury prior to diagnosis is important [184]. It is not easy to observe the early symptoms of kidney damage because minor changes in renal function are often clinically asymptomatic [185]. The metabolism of drugs is a process coordinated by multiple renal enzyme systems, including CYP450 and flavin-containing monooxygenases. During biotransformation, toxic metabolites and reactive oxygen species are produced. The accumulation of these molecules leads to oxidative stress. All these reactions contribute to kidney injury [186]. There are several well-documented drugs that cause acute kidney disease. The effects of prolonged administration of cyclosporine A (CsA) on kidney structure were studied using electron microscopy and morphometry.
Apoptosis and necrosis of the proximal tubules with dislocated brush borders, swollen mitochondria, multiple lysosomes, unformed basement membranes of the glomeruli, and atypical mesangial matrices were observed. Treatment with melatonin partially prevented these disturbances. Melatonin also attenuated damage caused by CsA [187]. Renal fibrosis was observed following treatment with CsA and exposure to carbon tetrachloride (CCl4). Melatonin minimized the accumulation of leukocytes by reducing the expression of iNOS and p38-mitogen-activated protein kinase (MAPK). It also protected kidneys against the influx of mononuclear cells and the fibrosis induced by CCl4 [188]. Nephrotoxicity is also the main adverse outcome of vancomycin administration, and there is a connection between vancomycin and oxidative stress. After the administration of vancomycin, the production of intracellular reactive oxygen species in LLC-PK1 renal tubular cells was higher and caused cellular apoptosis [189]. The supplementation of melatonin reduced the episodes of acute kidney injury during treatment with this antibiotic [190]. The situation is similar for cisplatin therapy, where the number of renal complications is relatively high [191]. Apoptosis and necroptosis of renal cells are the pathomechanisms of kidney injury during treatment with cisplatin. The anti-inflammatory properties of melatonin allow it to inhibit these processes: the upregulation of the RIPK1/RIPK3 multiprotein complex, which plays a crucial regulatory role in the initiation of cell death, is significantly attenuated by melatonin [192].
When it comes to different types of therapy, drugs are not the only treatment that can cause nephrotoxicity. Radiation therapy is also limited by the possibility of kidney injury, which depends on the dose and time of exposure. It is estimated that about 6 months after radiotherapy, patients develop latent acute nephritis, and chronic nephritis may occur after 18 months of the treatment. Melatonin scavenges hydroxyl radicals, inhibits nitric oxide synthase, and increases the stimulation of antioxidant enzymes with significant functions in the organism, including superoxide dismutase and glutathione peroxidase. The protective effect of this hormone was assessed using both light and electron microscopy [193].
The Role of Melatonin in Acute Ischemia-Reperfusion Injury
Surgical procedures also sometimes result in acute kidney injury. Cardiac surgery and renal transplantation are operations during which melatonin treatment has a positive effect on kidney deterioration. Cardiac surgery causes renal dysfunction in approximately 7.7% of patients [194]. Acute kidney injury during cardiac surgery is associated with higher morbidity and mortality and often prolongs hospitalization. Several factors lead to renal dysfunction after cardiac surgery, such as nonpulsatile blood flow; catecholamines and mediators of inflammation that circulate in the blood; and kidney injury caused by emboli and by free hemoglobin released from destroyed erythrocytes. All of them lead to renal complications [195]. The time of ischemia correlates with the degree of kidney damage and its irreversibility (Table 1) [196]. Several studies suggest a potential therapeutic effect of melatonin in minimizing renal injury during ischemia and reperfusion [197][198][199]. A histopathological assessment of kidney injury after melatonin treatment showed that the damage was less severe [197]. This may be explained by multiple pathways, e.g., a lower level of the lipid peroxidation marker malondialdehyde, higher activity of superoxide dismutase and catalase, reduced apoptosis due to minimized DNA damage, and a suppression of inflammation, which is expressed by reductions in the concentrations of tumor necrosis factor-alpha, interleukin-1β, nuclear factor kappa B, kidney injury molecule-1, IL-18, matrix metalloproteinase, and neutrophil gelatinase-associated lipocalin [200].
The anti-inflammatory and anti-oxidative functions of melatonin have also been observed among kidney transplant recipients. The administration of melatonin led to improved kidney function after renal transplantation, as manifested by serum levels of biomarkers such as neutrophil gelatinase-associated lipocalin, whose concentration was significantly decreased after the administration of melatonin [201]. In addition, the inhibition of oxidative stress, apoptosis, and the secretion of proinflammatory cytokines, the impedance of neutrophil and macrophage accumulation, and increased autophagic flux were observed during melatonin treatment [197]. In conclusion, melatonin, with its antioxidant effects, has the potential to reverse the consequences of acute kidney ischemia [202].
Summary
Melatonin has both direct and indirect influences on the well-being of patients with CKD. Firstly, it regulates the day and night cycle and maintains the proper quality of sleep. Sleeping disorders lead to depression and behavioral complications. Deficient sleep is connected to physical weakness, an increased level of aggression, and attention disorders. It also restricts social life and personal development due to memory deterioration or reductions in physical performance. Furthermore, abnormalities in melatonin production or secretion are connected to pathologies of the nervous system, such as Alzheimer's and Parkinson's diseases. Besides controlling the circadian rhythm, melatonin has additional physiological effects related to the glucose balance, the control of blood pressure, phosphocalcic metabolism, and hemostasis. Kidney degeneration is often a consequence of the frequent coexistence of arterial hypertension, diabetes mellitus, atherosclerosis, and obesity. Melatonin has been proven to have beneficial effects in all of these complications. Melatonin also plays a direct renoprotective role. Moreover, it can be helpful in almost every type of kidney injury because inflammation, apoptosis, and oxidative stress occur regardless of the mechanism. Melatonin regulates mitochondrial metabolism and ATP production and protects mitochondria. It inactivates free radicals by donating one or more electrons and thus reduces oxidative stress. Due to these mechanisms, melatonin enables normal mitochondrial functions and protects patients from subsequent apoptotic implications and the death of kidney cells. The role of this hormone in kidney disease is summarized in Figure 3.
So far, there have been many studies of exogenous melatonin administration to animals. The number of human studies with the use of melatonin is not high, but it is increasing. According to the available clinical trials, melatonin can improve the quality of life and prolong survival in patients with CKD.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of interest:
The authors declare no conflict of interest.
Inclusive Education Practices: Fostering Diversity and Equity in the Classroom
This journal article examines inclusive education practices with a focus on fostering diversity and equity within the classroom. Recognizing the importance of accommodating diverse learning needs, the study explores strategies and approaches to create an inclusive educational environment. The research underscores the significance of adopting a student-centered approach that acknowledges and respects individual differences. It delves into the implementation of flexible instructional methods, personalized learning plans, and varied assessment strategies to cater to a broad spectrum of learners, including those with diverse abilities and backgrounds. Furthermore, the article investigates the role of teacher professional development in promoting inclusive practices. It explores how educators can enhance their skills and knowledge to create inclusive classrooms that embrace diversity. The study emphasizes collaborative efforts among educators, support staff
Introduction
Education is the cornerstone of societal development, and its impact resonates profoundly when it embraces inclusivity. This study, titled "Inclusive Education Practices: Fostering Diversity and Equity in the Classroom," addresses the imperative to create learning environments that accommodate diverse needs and backgrounds. In this introduction, we explore the background, research gap, urgency, previous studies, novelty, objectives, and potential benefits of investigating inclusive education practices.
The landscape of education is evolving, and the call for inclusivity has become increasingly prominent. Inclusive education, defined as the practice of accommodating students of all abilities and backgrounds within mainstream classrooms, is gaining recognition as an essential component of a just and equitable educational system. Embracing diversity in the classroom not only reflects societal pluralism but also nurtures a more comprehensive and enriching learning environment.
While strides have been made toward inclusive education, a significant research gap persists in understanding the nuanced dynamics of its implementation. The intricacies of fostering diversity and equity in the classroom, considering various learning styles and needs, warrant closer examination. Existing research often lacks a granular exploration of the practical strategies, challenges, and outcomes associated with inclusive education practices.
The urgency of investigating inclusive education practices lies in their potential to address educational disparities and contribute to the development of a socially just society. As global communities become more diverse, educational institutions must adapt to cater to the unique requirements of every learner. Understanding the practical dimensions of inclusive education is crucial for educators, policymakers, and stakeholders committed to fostering equitable learning environments.
Past research has predominantly explored the theoretical underpinnings and philosophical foundations of inclusive education. However, there is a dearth of studies that delve into the day-to-day implementation strategies and experiences within diverse classrooms. By building upon previous research, this study seeks to bridge the gap between theory and practice, offering practical insights into the lived reality of inclusive education.
The novelty of this research lies in its focus on the practical facets of inclusive education practices. Rather than reiterating theoretical frameworks, this study aims to contribute a nuanced understanding of how inclusive principles manifest in actual classroom settings. By uncovering novel strategies, challenges, and success stories, the research strives to offer a fresh perspective on the dynamic nature of inclusive education.
The primary objectives of this study are to explore the diverse strategies employed in inclusive education, identify challenges faced by educators, and assess the outcomes of inclusive practices on both students and the broader learning community. The potential benefits extend to informing educational policies, guiding teacher professional development, and ultimately fostering a more inclusive, equitable, and enriching educational experience for all learners.
Research Design:
This study adopts a mixed-methods research design to comprehensively explore inclusive education practices. The integration of qualitative and quantitative approaches allows for a multifaceted investigation, capturing both the depth and breadth of the phenomenon. This design aligns with the study's aim to uncover the practical strategies, challenges, and outcomes associated with fostering diversity and equity in the classroom.
Participants:
The participants in this research will include educators, students, and administrators from diverse educational institutions. A purposive sampling technique will be employed to ensure representation across various educational levels, disciplines, and socio-cultural contexts. The inclusion of participants with diverse perspectives enhances the richness and applicability of the study's findings.
Data Collection:
- Interviews: In-depth interviews with educators will be conducted to gain insights into their experiences, perceptions, and strategies related to inclusive education. These interviews will be semi-structured, allowing for flexibility and depth in exploring individual experiences.
- Surveys: Surveys will be administered to both educators and students to gather quantitative data on the prevalence of inclusive practices, perceived benefits, and challenges. The survey instruments will include both closed-ended and Likert-scale questions to facilitate statistical analysis.
- Classroom Observations: Direct observations of inclusive classrooms will be conducted to observe the implementation of inclusive strategies in real time. This qualitative data collection method aims to provide a nuanced understanding of the dynamics within inclusive learning environments.
- Document Analysis: Educational materials, policies, and curriculum documents related to inclusive education will be analyzed to contextualize the research within the institutional framework. Document analysis contributes valuable insights into the formal structures supporting inclusive practices.
Data Analysis:
- Qualitative Analysis: Thematic analysis will be applied to qualitative data gathered from interviews and classroom observations. This process involves identifying recurring themes, patterns, and nuances within the narratives of participants.
- Quantitative Analysis: Survey data will undergo statistical analysis, including descriptive statistics and inferential tests, to identify patterns, correlations, and significant differences. This quantitative analysis provides a broader overview of the prevalence and impact of inclusive practices.
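As a concrete illustration of the descriptive side of such a survey analysis, the sketch below summarizes a single hypothetical Likert item. The function name, the 1-5 scale, and the "agree = 4 or above" convention are assumptions for illustration, not details of the study's actual instrument.

```python
from statistics import mean, stdev

def summarize_likert(responses):
    """Descriptive summary for one 1-5 Likert item.

    `responses` is a list of integer ratings; the agree cut-off of
    4-or-above is a common convention, assumed here for illustration.
    """
    return {
        "n": len(responses),
        "mean": round(mean(responses), 2),
        "sd": round(stdev(responses), 2),
        "pct_agree": round(100 * sum(r >= 4 for r in responses) / len(responses), 1),
    }

# Hypothetical ratings for "Inclusive strategies are used in my classroom".
summary = summarize_likert([5, 4, 4, 3, 5, 4, 2, 5, 4, 4])
# summary -> {'n': 10, 'mean': 4.0, 'sd': 0.94, 'pct_agree': 80.0}
```

Inferential comparisons (for example between educator and student ratings) would sit on top of summaries like this one.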
Triangulation: Data triangulation, involving the comparison of findings from different data sources, will be employed to enhance the credibility and validity of the study. The convergence of evidence from interviews, surveys, observations, and document analysis contributes to a more comprehensive understanding of inclusive education practices.
Ethical Considerations: This research will adhere to ethical standards, ensuring informed consent, confidentiality, and respect for the rights of participants. Institutional review board (IRB) approval will be obtained before initiating data collection.
Limitations: Limitations may include potential biases in self-reported survey data and the contextual specificity of classroom observations. These limitations will be transparently acknowledged in the interpretation of results.
Result and Discussion
The analysis and discussion segment of this study, "Inclusive Education Practices: Fostering Diversity and Equity in the Classroom," delves into the intricate dynamics of fostering inclusive education. This narrative aims to provide a comprehensive understanding of the practical strategies, challenges, and outcomes associated with inclusive practices, drawing from a mixed-methods research design.
Qualitative Insights: Through in-depth interviews with educators, a rich tapestry of qualitative insights emerged. Educators articulated the diverse strategies they employ to create inclusive classrooms, ranging from differentiated instruction to collaborative learning environments. The narratives illuminated the nuanced understanding these educators have of their students' needs and the importance of adapting pedagogical approaches to foster a sense of belonging.
Quantitative Findings: The quantitative analysis of surveys administered to educators and students provided a statistical lens on the prevalence and impact of inclusive practices. A majority of respondents affirmed the existence of inclusive strategies within their classrooms, with a notable consensus on the positive impact of such practices on both academic and socio-emotional outcomes. The quantitative data underscored the alignment between educators' perceptions and the perceived benefits reported by students.
Observations Unveiling Real-Time Dynamics: Direct observations of inclusive classrooms added a layer of real-time dynamics to the study. The qualitative richness derived from observing the implementation of inclusive strategies showcased the fluidity of interactions, the adaptability of educators, and the engagement of students. The observations underscored the importance of a dynamic and flexible teaching approach that caters to the diverse needs of learners.
Documentary Insights:
Analysis of institutional documents and policies revealed the formal structures supporting inclusive education. Document analysis highlighted the role of policy frameworks, professional development initiatives, and curriculum adaptations in institutionalizing inclusive practices. This provided a broader context for understanding the systemic support required for successful implementation.
Challenges Faced by Educators: The qualitative data, particularly from interviews, brought forth the challenges faced by educators in implementing inclusive education practices. Common challenges included resource constraints, varying levels of support, and the need for ongoing professional development. The narrative illuminated the resilience of educators in navigating these challenges, emphasizing the importance of collaborative efforts and a supportive institutional environment.
Triangulation of Data: The triangulation of findings from interviews, surveys, observations, and document analysis served as a robust methodological approach. Consistent themes emerged, enhancing the credibility and reliability of the study. Triangulation allowed for a holistic interpretation of the data, capturing the multi-dimensional aspects of inclusive education.
Implications and Future Directions: The integrated findings offer implications for both practitioners and policymakers. For educators, the study underscores the significance of flexibility, adaptability, and ongoing professional development in fostering inclusive classrooms. Policymakers can leverage the findings to refine and strengthen policy frameworks supporting inclusive education. Future research directions may involve longitudinal studies to assess the sustained impact of inclusive practices and further exploration of cultural and contextual factors influencing implementation.
Conclusion
In conclusion, the analysis and discussion provide a nuanced exploration of inclusive education practices, offering practical insights and theoretical contributions to the ongoing discourse. The integration of qualitative and quantitative data contributes to the broader understanding of how inclusive principles manifest in real-world classroom settings, ultimately aiming to enhance the educational experience for all learners.
Carbamazepine Induced Stevens-Johnson Syndrome: A Case Report
Stevens-Johnson syndrome (SJS) is a mucocutaneous, cell-mediated hypersensitivity reaction that affects 2 to 3 cases per million. SJS is generally rare but potentially life threatening and is commonly drug induced. We report a case of a 12-year-old male child admitted to the paediatric intensive care unit at a tertiary care hospital with chief complaints of fever, abdominal pain, vomiting, and erythematous lesions and macular rashes all over the body, including the face, neck, chest, both upper and lower limbs, and abdomen. On the next day, bullous lesions were noted on the face and upper limbs; conjunctivitis, angioedema, oral ulcers, and blisters at the angle of the mouth were also observed. These signs and symptoms started on the 8th day after taking Tab Carbamazepine 100 mg. Based on the patient's signs and symptoms, the diagnosis was confirmed as carbamazepine-induced Stevens-Johnson syndrome. Such adverse drug reactions may lead to fatal organ failure and skin damage resulting in mortality. Pharmacovigilance, which deals with the identification, assessment, and prevention of ADRs, can help in providing continuous information on safe medication use.
Introduction
Carbamazepine is associated with several dermatological adverse effects including rashes, urticaria, and photosensitivity reactions, whereas severe and life-threatening acute adverse cutaneous drug reactions such as erythema multiforme, toxic epidermal necrolysis (TEN) and Stevens-Johnson syndrome (SJS) are reported rarely. TEN is clinically characterized by erythematous macules and targetoid lesions throughout the body, along with full-thickness epidermal necrosis over more than 30 percent of the body surface area, whereas SJS has less than 10% of the body surface area affected by full-thickness epidermal necrosis with detachment, along with mucous membrane involvement in two or more areas. 1 Most of the reported cases of SJS occur during the first two months of antiepileptic drug use. The estimated risk ranges between one and ten cases per 10,000 new users for carbamazepine, lamotrigine, phenytoin, and phenobarbital, whereas lower rates have been reported for valproate. 2 The highest rates of SJS have been reported to occur with carbamazepine, around 14/10,000 users. 2,3 SJS is a clinical syndrome presumed to be a hypersensitivity reaction, manifested initially with prodromal symptoms of fever, malaise and a sore throat. The prodromal phase is then followed, within up to 14 days, by an acute polymorphous dermatologic syndrome manifested as erythematous maculopapular skin lesions, target lesions, bullae, vesicles, involvement of at least two mucous membranes, conjunctivitis and an associated systemic toxic state. 4 SJS is a severe, acute mucocutaneous reaction that is most often elicited by drugs and occasionally by infections. SJS and TEN are now considered to differ only in the extent of body surface area involved. 5 The drugs commonly implicated as the cause of SJS are anticonvulsants, sulfonamides, non-steroidal anti-inflammatory drugs and antibiotics. 6,7,8 Carbamazepine is prescribed in schizoaffective disorder and bipolar disorder as a mood stabilizer, and in seizure disorders, trigeminal neuralgia and chronic pain. It is associated with hypersensitivity reactions that range from benign urticaria to life-threatening cutaneous disorders, including SJS and TEN. 4,9,10 In psychiatry, cutaneous adverse drug eruptions are rarely noticed with atypical antipsychotics. To date, very few skin rashes and eruptions with olanzapine have been described in the literature. Dermatological side effects that have been reported with olanzapine are eruptive xanthomas, skin hyperpigmentation and purpura associated with thrombocytopenia. 11 Amongst other atypical antipsychotics, only two cases of erythema multiforme have been reported, one with ziprasidone, 12 and one with risperidone. 13 SJS carries a mortality that can be as high as 30% and requires early diagnosis, with prompt withdrawal of all suspected potential causative drugs.
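The body-surface-area cut-offs distinguishing SJS from TEN define a simple classification rule, sketched below for illustration. The function name is our own; the intermediate "SJS/TEN overlap" label for 10-30% involvement follows standard terminology but is an addition beyond the thresholds quoted in the text, and actual diagnosis also requires mucosal involvement, histology, and clinical assessment.

```python
def classify_by_detachment(bsa_percent):
    """Classify a severe cutaneous reaction by % body surface area detached.

    Encodes the cut-offs quoted in the text (SJS < 10%, TEN > 30%),
    with the standard overlap category in between. This BSA rule alone
    is not a diagnosis.
    """
    if bsa_percent < 10:
        return "SJS"
    if bsa_percent <= 30:
        return "SJS/TEN overlap"
    return "TEN"

assert classify_by_detachment(5) == "SJS"
assert classify_by_detachment(20) == "SJS/TEN overlap"
assert classify_by_detachment(45) == "TEN"
```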
Case History
A 60-year-old married male patient diagnosed with schizoaffective disorder presented with increased talk, disturbed sleep and hyperactivity for the past 3 months, following treatment with carbamazepine 200 mg twice daily along with olanzapine 10 mg twice daily. There was no family history of any psychiatric or physical illness, or drug reactions. Further history revealed that he had discontinued medications including carbamazepine three years back on his own due to a mild rash. He then had many psychotic and manic episodes, and had tried many mood stabilizers: lithium, sodium valproate and lamotrigine, but did not get good results. In the past, he had improved with carbamazepine, so carbamazepine was restarted at 200 mg once daily. On his second day of medication, he had a mild fever and general weakness along with flushing of the face, and subsequently developed maculopapular rashes, starting from the face, spreading to the neck and then the trunk, and later extending to both legs by the 10th day. On physical examination the patient had pruritic and stinging erythema and red maculopapular rashes on both legs (Figure 1). The pharynx, eyes and genital mucosa were not involved. Nikolsky's sign (mechanical pressure to the skin leading to blistering within minutes or hours) was positive. Laboratory examinations revealed a high erythrocyte sedimentation rate of 50 mm in the first hour; the leukocyte count was 8000 per cubic millimeter; other investigations were within normal limits.
All his medications were stopped and he was referred to the dermatology department for further management. A diagnosis of drug-induced SJS was made by the dermatologist. He was treated with dexamethasone injection 4 mg twice a day, ceftriaxone injection 1 gram twice a day and topical betamethasone. After 17 days his condition improved. The patient had a satisfactory recovery; at the time of discharge, he had generalized desquamation and incomplete peeling of the skin on the trunk and both legs. He was reviewed in the psychiatry department and was started on sodium valproate 1000 mg and olanzapine 10 mg per day. After three weeks, there was complete resolution of manic symptoms. There were no adverse effects.
Discussion
Carbamazepine has been strongly associated with SJS. Although SJS has multiple etiologies, being commonly triggered by viral infections (herpes simplex virus is the infectious agent most commonly involved) and neoplasias (carcinomas and lymphomas), 14 the most common cause is the use of medications. Among the drugs implicated most often are allopurinol, antibiotics, anticonvulsants and non-steroidal anti-inflammatories.
Recently, in a seven-year study, Devi et al. concluded that anticonvulsants were implicated in most cases of SJS, especially in the first eight weeks of treatment, and that the main drug responsible was carbamazepine. 2 Typically, the initial presentation is marked by symptoms of fever, myalgia, and general weakness for 1 to 3 days before the development of cutaneous lesions. The skin lesions are symmetrically distributed on the face and upper trunk. The rash spreads rapidly and is usually maximal within four days, sometimes within hours. The initial skin lesions are usually poorly defined macules with darker purpuric centers that coalesce. Diagnosis is arrived at through clinical history and examination; however, skin biopsy helps to confirm the diagnosis, usually by excluding bullous diseases not related to drug therapy. The patient in this case was exposed to carbamazepine twice; he had a mild rash on the first exposure a few years back, but the degree of his cutaneous reaction was greater with the second exposure, when he developed SJS. Adverse reactions to drugs are reported to increase with age. 15 SJS is reported to affect females more frequently than males, but an Indian study showed a slight male preponderance. 16 Although SJS appears in all age groups, it is more common in older people, probably because of their tendency to use more drugs. Most patients are in the second to fourth decades and onwards. 17 Mortality was observed more commonly in elderly patients. 18 It is possible that the severity of SJS is greater at extremes of age, perhaps due to a poorer immune response compared with adults. 19
Conclusion
Considering this case report of SJS associated with carbamazepine, it is suggested that carbamazepine readministration should be avoided in patients with a previous history of rash or SJS. In this regard, obtaining an accurate medical history is important. In addition, it is advisable to observe for any side effects while gradually titrating the dose at the start of treatment with carbamazepine. Awareness of drugs causing serious drug reactions such as SJS and TEN will help doctors prevent such reactions by judicious use of drugs and manage them adequately, reducing associated morbidity and mortality.
"year": 2018,
"sha1": "1bb11caaccc9e13f0d15e18c2b1486fd7bbfa2a9",
"oa_license": null,
"oa_url": "https://doi.org/10.14260/jemds/2017/792",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1058bb7d025961f062351d39c1eeed7a0c128fb1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Pooled analysis of efficacy and safety of ureteral stent removal using an extraction string
Abstract Objective: We conducted a pooled analysis to investigate the efficacy and safety of ureteral stent removal using an extraction string. Methods: A systematic review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Sources including EMBASE, MEDLINE, and the Cochrane Controlled Trials Register were searched to gather randomized controlled trials of ureteral stent removal using an extraction string. The references of included literature were also searched. Results: Four randomized controlled trials containing a total of 471 patients were involved in the analysis. We found that the ureteral stent removal using an extraction string group had a greater decrease of visual analog scale (VAS) score (mean difference (MD) −1.40, 95% confidence interval (CI) −1.99 to −0.81, P < .00001) compared with the no string group. The string group did not show a significant difference in Ureteric Stent Symptom Questionnaire (USSQ) (P = .15), general health (P = .77), stent dwell time (P = .06), or urinary tract infection (UTI) (P = .59), with the exception of stent dislodgement (odds ratio (OR) 10.36, 95% CI 2.40 to 44.77, P = .002), compared with the no string group. Conclusions: Ureteral stent removal by string provides significantly less pain than removal by cystoscope without increasing stent-related urinary symptoms or UTI. However, this must be balanced against a risk of stent dislodgement and, hence, may not be a good option in all patients.
Introduction
With the development of endoscopic technology, the indications for retrograde endoscopic therapy to manage urolithiasis have expanded. These endourologic advancements have brought about not only less invasiveness but also higher stone-free rates for patients with urolithiasis. [1] Auge and colleagues reported that 80% of urologists placed a stent after uncomplicated ureteroscopy for stone disease. [2] Most urologists insert the stent to avoid stressful emergencies and allow it to remain for 1 to 2 weeks after ureteroscopy. [2] However, importance should also be attached to the quality of life (QoL) of patients, as urolithiasis is a benign disease. Previous studies have shown that placing a ureteric stent increases postoperative patient morbidity and negatively affects the patient's QoL. [3] Besides, the additional suffering due to cystoscopic extraction is even more painful. Previous studies have shown that cystoscopy remains a potentially painful procedure, after which gross hematuria, urinary frequency, and dysuria can occur more frequently than expected. [4] Current ureteral stents are manufactured with a string attached to the distal end, allowing for removal without cystoscopy, which may lead to an improvement in the patient's QoL. The method is advantageous, but there is wide variability in its clinical application. The rationale behind this is thought to be concerns over perceived risks, including increased lower urinary tract symptoms (LUTS) from string irritation, stent dislodgement, infection, stent retention due to patients forgetting to remove stents, broken strings, and a lack of strong evidence relating to its safety and tolerability. [5,6,7] A systematic review published in 2016 included only a small sample and lacked well-designed multicenter randomized controlled trials (RCTs), which resulted in insufficient evidence for its conclusions.
[8] Due to the paucity of available literature, we conducted a pooled analysis of RCTs to evaluate the efficacy and safety of ureteral stent removal using an extraction string.
Study protocol
This study was implemented following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. [9] Only randomized controlled studies were included. Observational studies, editorials, commentaries, and review articles were excluded, as were conference abstracts. If there was more than 1 publication resulting from the same patient cohort, the most recent publication was used for analysis.
Information sources and literature search
Based on databases including MEDLINE (1996 to April 2019), EMBASE (1999 to April 2019) and the Cochrane Controlled Trials Register, we performed a comprehensive search for studies of the efficacy and safety of ureteral stent removal using an extraction string. The subject headings and text-word terms were as follows: "ureteral stent", "removal", "string", and "randomized controlled studies". Only published English-language articles were included. When necessary, authors of retrieved articles were contacted to provide more accurate data for their research. We also searched published systematic reviews and other key references. Two investigators independently screened titles and abstracts to identify studies that met the inclusion criteria. When the abstract was insufficient to determine whether a study met the inclusion criteria, the full text was reviewed.
Inclusion criteria and trial selection
The inclusion criteria were as follows: (1) ureteral stent removal using an extraction string was involved; (2) the full text could be acquired; (3) the data provided by the article were valid and valuable, mainly the number of cases and the results for each indicator; (4) the study was a randomized controlled trial; (5) a study could be included even if a group of its participants took part in multiple studies.
The PRISMA diagram of selection is shown in Figure 1.
Quality assessment methods
Our study classified the quality of each study using the Jadad scale. [10] Additionally, several measurable criteria were used to assess the quality of the individual studies, including the randomization method, allocation concealment, blinding, losses to follow-up, and whether a sample-size calculation or intention-to-treat (ITT) analysis was reported. Studies were graded in line with the principles of the Cochrane Handbook for Systematic Reviews of Interventions v5.10.
[11] Each RCT was graded according to the following quality classification: studies satisfying almost all of the quality criteria were considered to have a low probability of bias; studies satisfying some of the criteria, or where criteria were unclear, a moderate probability of bias; and studies satisfying few of the criteria, a high probability of bias.
All authors participated in the assessment of each RCT, and everyone eventually agreed on the results. All reviewers independently assessed whether each study fitted the criteria. Any discrepancies were recorded, discussed, and settled among the authors.
Data extraction
Based on predetermined criteria, 2 authors independently extracted relevant data from each article. The following data were extracted from the included studies: first author's name; publication year; country of study; technique received; type of method; number of participants; mean age; and data on visual analog scale (VAS), Ureteric Stent Symptom Questionnaire (USSQ), general health, stent dwell time, urinary tract infection, and stent dislodgement.
No ethical approval was required for this study.
Statistical analyses and meta-analysis
RevMan version 5.3.0 (Cochrane Collaboration, Oxford, UK) [11] was used for data analysis. Fixed- or random-effects models were applied. Mean differences (MD) were used to evaluate continuous data and odds ratios (OR) for dichotomous outcomes, each with the corresponding 95% confidence interval (CI). [12] If the P value of the heterogeneity test was > .05, the studies were considered homogeneous and a fixed-effect model was used. Between-study variance was quantified with Tau 2 and inconsistency with the I 2 statistic, which reflects the proportion of total variation due to heterogeneity. A random-effects model was used when the I 2 value was greater than 50%, indicating significant heterogeneity. A P value less than .05 was considered statistically significant.
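The inverse-variance pooling and heterogeneity statistics described here (a pooled MD with 95% CI, Cochran's Q, and I 2) can be sketched in a few lines. The per-study values below are hypothetical and only illustrate the arithmetic; this is not the RevMan implementation or the actual trial data:

```python
import math

def fixed_effect_pool(mean_diffs, ses):
    """Inverse-variance fixed-effect pooling of per-study mean differences.

    mean_diffs: per-study mean differences
    ses: their standard errors
    Returns (pooled MD, 95% CI, Cochran's Q, I^2 in percent).
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Cochran's Q and the I^2 inconsistency statistic (truncated at 0)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, mean_diffs))
    df = len(mean_diffs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, q, i2

# Hypothetical per-study VAS mean differences and standard errors
md, ci, q, i2 = fixed_effect_pool([-1.2, -1.6, -1.4], [0.4, 0.5, 0.45])
```

With these hypothetical inputs the pooled MD lands near the precision-weighted average of the three study estimates, and because Q does not exceed its degrees of freedom, I 2 is truncated to 0%.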
Study selection process, search results, and characteristics of the trials
Our search identified 46 articles from the 3 databases. After screening abstracts and titles, we excluded 26 articles. Of the remaining 20 articles, 14 were excluded because of a lack of available data and 2 because they reported the same experiment (details in Fig. 1). Finally, 4 articles containing 4 RCTs [5,[13][14][15] were included to evaluate the efficacy and safety of ureteral stent removal using an extraction string. The details of the 4 articles are listed in Table 1. Patients with ureteral stent removal using an extraction string included in each study showed similar evaluation indices.
Risk of bias in studies
All studies included in the analysis were randomized controlled studies. All had an appropriate sample-size calculation, and no study reported an intention-to-treat analysis. All of the included studies were of high quality, with Jadad scores rated A (Table 2). The funnel plot was highly symmetrical, with all 4 squares contained in the large triangle, and no obvious evidence of bias was found (Fig. 2).
Primary outcomes
3.3.1. VAS.
Three RCTs gathering a total of 331 patients (165 in the string group and 166 in the no string group) contributed VAS data. The forest plot demonstrated that the string group had a lower VAS score (MD −1.40, 95% CI −1.99 to −0.81, P < .00001) (Fig. 3) compared with the no string group. Besides, the VAS scores for males and females were both significantly lower in the string group (P < .00001 and P < .001) (Fig. 3). 3.3.2. USSQ. The USSQ did not differ significantly between the string group and the no string group (P = .15) (Fig. 4).
3.3.3. General health.
Three RCTs evaluated general health in a sample of 331 patients (165 in the string group and 166 in the no string group). The forest plots showed an MD of 0.17 with a 95% CI of −0.98 to 1.32 (P = .77) (Fig. 4). No statistically significant difference was found between the string group and the no string group in general health. 3.3.4. Stent dwell time. The model did not show a marked difference between the 2 groups in the duration of stent dwell time (MD −2.86, 95% CI −5.80 to 0.08, P = .06) (Fig. 4).
3.3.5. UTI.
Four RCTs with a sample of 471 patients (223 in the string group and 248 in the no string group) evaluated the rates of UTI. There was no statistically significant difference between the string group and the no string group in the incidence of UTI (OR 1.27, 95% CI 0.53 to 3.09, P = .59) (Fig. 5).
3.3.6. Stent dislodgement. Four RCTs evaluated stent dislodgement with a sample of 471 patients (223 in the string group and 248 in the no string group). The fixed-effects estimate of the OR was 10.36, with a 95% CI of 2.40 to 44.77 (P = .002) (Fig. 5). This result indicates that the risk of stent dislodgement was higher in the string group than in the no string group.
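The OR with 95% CI reported for outcomes such as stent dislodgement is computed from 2x2 event counts per study. A minimal single-table sketch using the Woolf (log-normal) confidence interval follows; the event counts are hypothetical, not the counts from the included trials:

```python
import math

def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds ratio and Woolf (log-normal) 95% CI for a 2x2 table.

    Assumes no zero cells; a continuity correction (e.g. adding 0.5
    to every cell) would be needed otherwise.
    """
    a, b = events_a, total_a - events_a   # exposed: events / non-events
    c, d = events_b, total_b - events_b   # controls: events / non-events
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 14/223 dislodgements with string vs 2/248 without
or_, (lo, hi) = odds_ratio(14, 223, 2, 248)
```

Because the CI is constructed on the log scale, it is asymmetric around the point estimate, which matches the wide upper bound seen when event counts are small.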
Discussion
Ureteral stents have been used to facilitate urinary drainage to the bladder since the 1960s. [16,17] Although the benefits in certain patients are clear, indwelling stents present their own set of problems, both while in situ and subsequently during removal. Conventional ureteral stent removal usually requires an elective appointment slot, nursing and medical staff provision, and sometimes even general anaesthesia. Equipment is also needed, such as a cystoscope, fluid irrigation, a camera stack, and stent graspers. Cystoscopy itself is associated with a small risk of morbidity. [19] Besides, travelling to and from the hospital for multiple appointments can be cumbersome and costly for many patients. We performed this meta-analysis of 4 high-quality RCTs including 471 participants to compare ureteral stent removal using an extraction string with the conventional method. The visual analogue scale/score (VAS) is a sensitive and comparable pain measure: a 10 cm horizontal line is drawn on paper, with one end at 0 indicating no pain, the other end at 10 indicating severe pain, and the portion in between indicating intermediate degrees of pain. The ureteral stent symptom questionnaire (USSQ) is a psychometrically valid measure of the symptoms and quality-of-life impact of ureteral stents. Compared with the conventional cystoscopic method, using an extraction string to remove the ureteral stent produced a greater decrease in VAS. In addition, patients with an extraction string did not show significant differences in USSQ, general health, stent dwell time, or UTI.
Besides, Kim et al [14] conducted a randomized controlled study evaluating patients' preference for ureteral stent removal using an extraction string. They found that most patients preferred removal of the ureteral stent using an extraction string.
The systematic review published by Oliver et al [8] found that overall stent dwell time was lower in patients who had their stents removed via extraction strings, which differs from our conclusion. Because that review included case-control and cohort studies, its strength of evidence is relatively weak. Inoue et al [15] and colleagues reported that ureteral stent removal by string after ureteroscopy provided significantly less pain than removal by cystoscope for male patients but not for females. This also differs from our subgroup analysis. Besides, no meta-analysis had been published on this question so far. On all accounts, more high-quality RCTs with suitable study cohorts are needed to confirm our findings.
With respect to stent dislodgement, we found that the string group had a higher rate of this disadvantage than the no string group. Althaus et al [18] reported that, when stratified by gender, 5.3% of men and 24.4% of women with a stent string experienced stent dislodgment (P = .013), with women experiencing dislodgment 4-fold more often than men. The higher rate of stent dislodgment in women may be related to their shorter urethral length or to incidentally tugging at the stent string when bathing or voiding. We therefore recommend that patients take great care not to tug the stent string when bathing or voiding. Reducing healthcare costs is another advantage of the stent extraction string. Barnes et al [5] estimated that avoiding the need for a second hospital visit and cystoscopy for stent removal resulted in savings in their hospital. Bockholt et al [19] reported an estimated $1300/patient cost associated with cystoscopic stent removal, which would be avoided by patients performing home stent extraction using strings. Liu et al [14] also demonstrated that patients with an extraction string incurred lower costs for ureteral stent removal (8.97 ± 3.07 vs 455 ± 0 CNY, P = .001), and that the overall cost for patients with an extraction string was significantly lower than for patients without one (86.7 ± 167.7 vs 507.9 ± 147.8 CNY, P = .008).
This pooled analysis includes only findings from randomized controlled trials.
According to the quality-assessment scale that we applied, the quality of the individual studies in the pooled analysis was acceptable. The results of this analysis are of great importance not only from a scientific standpoint but also in everyday clinical practice. However, the number of included studies was small. Selection bias, subjective factors, and publication bias may also have affected the final results of our study. A further limitation of our findings is variability in factors such as stone size, stone location, the skill and experience of the operating surgeon, and the efficacy of perioperative care. In addition, data from unpublished studies were not included in the analysis. These factors may have resulted in bias. More high-quality trials with larger samples are needed to learn more about the efficacy and safety of ureteral stent removal using an extraction string.
Conclusion
Ureteral stent removal by string provides significantly less pain than removal by cystoscope without increasing stent-related urinary symptoms or UTI. However, this must be balanced against a risk of stent dislodgement and, hence, may not be a good option in all patients.
"year": 2019,
"sha1": "dd8dac459795f8f2f206efcd582074a0a3a58fb3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000017169",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d43b61431aecd0010635b8099d79a86e7eb8dad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Triggering phase-coherent spin packets by pulsed electrical spin injection across an Fe/GaAs Schottky barrier
The precise control of spins in semiconductor spintronic devices requires electrical means for generating spin packets with a well-defined initial phase. We demonstrate a pulsed electrical scheme that triggers the spin ensemble phase in a similar way as circularly-polarized optical pulses generate phase-coherent spin packets. Here, we use fast current pulses to initialize phase-coherent spin packets, which are injected across an Fe/GaAs Schottky barrier into $n$-GaAs. By means of time-resolved Faraday rotation, we demonstrate phase coherence by the observation of multiple Larmor precession cycles for current pulse widths down to 500 ps at 17 K. We show that the current pulses are broadened by the charging and discharging time of the Schottky barrier. At high frequencies, the observable spin coherence is limited only by the finite bandwidth of the current pulses, which is on the order of 2 GHz. These results therefore demonstrate that all-electrical injection and phase control of electron spin packets at microwave frequencies is possible in metallic-ferromagnet/semiconductor heterostructures.
I. INTRODUCTION
The preparation and phase-controlled manipulation of coherent single spin states or spin ensembles is fundamental for spintronic devices 1,2 . Devices based on electron spin ensembles require, for spin coherence, an initial triggering of the phase of all the individual spins, which results in a macroscopic phase of the ensemble. Such phase triggering can easily be obtained by circularly polarized ultrafast laser pulses, which are typically shorter than one ps. 3,4 By impulsive laser excitation, all spins of the ensemble are oriented in the same direction, i.e. they are created with the same initial phase. Spin precession of the ensemble can be monitored by time-resolved magneto-optical probes as the spin precession time is usually orders of magnitude longer than the laser pulse width. Along with other techniques, these time-resolved all-optical methods have been used to detect spin dephasing times 3,5-7 , strain-induced spin precession 8,9 and phase-sensitive spin manipulation in lateral devices 8,10,11 .
Spin precession can also be observed in dc transport experiments [12][13][14][15][16][17] . In spin injection devices, for example, electron spins are injected from a ferromagnetic source into a semiconductor [18][19][20][21][22][23][24][25][26][27][28][29] . Their initial spin orientation near the ferromagnet/semiconductor interface is defined by the magnetization direction of the ferromagnet. Individual spins start to precess in a transverse magnetic field. This results in a rapid depolarisation of the steady-state spin polarisation (the Hanle effect), because spins are injected continuously in the time domain. The precessional phase is preserved partially when there is a well-defined transit time between the source and the detector 12, 15 . This has been achieved in Si by spinpolarized hot electron injection and detection techniques operated in a drift-dominated regime, which allowed for multiple spin precessions 15,16 , while only very few precessions could be seen in GaAs-based devices 12,13 . On the other hand, pulsed electrical spin injection has been reported 26,27 , but no spin precession was observed. Despite recent progress in realizing all-electrical spintronic devices, electrical phase triggering is missing.
Here, we use fast current pulses to trigger the ensemble phase of electrically generated spin packets during spin injection from a ferromagnetic source into a III-V semiconductor. Coherent precession of the spin packets is probed by time-resolved Faraday rotation. Our device consists of a highly doped Schottky tunnel barrier formed between an epitaxial iron (Fe) and a (100) oriented n-GaAs layer. We chose this device design for three reasons: (I) the Schottky barrier profile guarantees large spin injection efficiencies 21,24,30,31 , (II) the n-GaAs layer is Si doped with carrier densities near the metal-insulator transition (n = 2 − 4 × 10 16 cm −3 ) which provides long spin dephasing times T * 2 for detection 3,32,33 and (III) the Fe injector has a two-fold magnetic in-plane anisotropy 34 , which allows for a non-collinear alignment between the external magnetic field direction and the magnetization direction of the Fe layer and thus the spin direction of the injected spin packets. This non-collinear alignment is needed to induce Larmor precession of the spin ensemble. We observe spin precession of the electrically injected spin packets for current pulse widths down to 500 ps. The net magnetization of the spin packet diminishes with increasing magnetic field. We link this decrease to the high-frequency properties of the Schottky barrier. Its charging and discharging leads to a broadening of the current pulses and hence temporal broadening of the spin packet as well as phase smearing during spin precession. We introduce a model for ultrafast electrical spin injection and extract a Schottky barrier time constant from our Faraday rotation data of 8 ± 2 ns, which is confirmed by independent high-frequency electrical characterization of our spin device.
II. EXPERIMENT
Our measurement setup and sample geometry are depicted in Fig. 1a. The sample consists of an Al-capped 3.5-nm thick, epitaxially grown Fe(001) layer on n-doped Si:GaAs(001). The doping concentration of the 15-nm thick n + -GaAs layer starting at the Schottky contact is 5 × 10 18 cm −3 followed by a 15 nm n + /n transition layer with a doping gradient, a 5-µm thick bulk layer with doping concentration 2 × 10 16 cm −3 and a highly doped (∼ 1 × 10 18 cm −3 ) GaAs substrate (layer stack details in Fig. 3c). The sample mesa with 650 µm radius is etched down to the substrate. The T * 2 of the substrate is smaller than 1 ns. The magnetic easy axis of the Fe layer is oriented along the GaAs [011] (±x direction). Comparison of electrical and all-optical Hanle measurements indicates a spin injection efficiency into the bulk n-GaAs layer of ∼ 7% for a wide bias range. The differential resistance of the layer stack and the magnetic characterization of the Fe layer is shown in Appendix A.
Samples are mounted in a magneto-optical cryostat kept at 17 K with a magnetic field B z oriented along the ±z direction. For time-resolved electrical spin injection, a voltage pulse train (amplitude 1.8 V) from a pulse generator (65 ps rise and fall time) is applied via a bias-tee to the sample, which is placed on a coplanar waveguide within a magneto-optical cryostat. Linearly polarized laser pulses at normal incidence to the sample plane and phase-locked to the electrical pulses monitor the ±y component of spins injected in the GaAs by detecting the Faraday rotation angle θ F . The linearly polarized laser pulses (P = 200 µW with a focus diameter ≈ 50 µm on the sample) are generated by a picosecond Ti-sapphire laser with a stabilized repetition frequency of 80 MHz. They are phase-locked to the voltage pulses and can be delayed by a time ∆t up to 125 ns with a variable phase shifter with ps-resolution. The laser energy 1.508 eV is tuned to just below the band gap of the GaAs. The repetition interval of the pump and probe pulses can be altered from 12.5 ns to 125 ns by an optical pulse selector and the full width at half maximum ∆w of the voltage pulses can be varied from 100 ps to 10 ns. Both pump and probe pulses are intensity-modulated by 50 kHz and 820 Hz, respectively, in order to extract the pump induced θ F signal by a dual lock-in technique.
A. Static spin injection
We first use static measurements of the Faraday rotation to demonstrate electrical spin injection in our devices (Fig. 1b). The sample is reverse biased, i.e. positive voltage probe on GaAs, and spins are probed near the fundamental band gap of GaAs. At B z = 0 T, spins are injected parallel to the easy axis direction of the Fe layer yielding θ F = 0. At small magnetic fields B z , spins start to precess towards the y-direction yielding θ F ≠ 0. θ F is a direct measure of the resulting net spin component S y . Changing the sign of B z inverts the direction of the spin precession which results in a sign reversal of θ F . As expected 12 , the direction of spin precession also inverts when the magnetization direction of the Fe layer is reversed (see red curve in Fig. 1b). θ F approaches zero at large fields, since the continuously injected spins dephase due to Larmor precession causing strong Hanle depolarisation.
B. Time-resolved spin injection
For time-resolved spin injection experiments, we now apply voltage pulses with a full width at half maximum of ∆w = 2 ns and a repetition time of T rep = 125 ns with T rep > T * 2 . The corresponding time-resolved Faraday rotation data are shown in Figs. 2a and 2b at various magnetic fields. Most strikingly, we clearly observe Larmor precession of the injected spin packets, demonstrating that the voltage pulses trigger the macro-phase of the spin packets. It is apparent that the amplitude of θ F is diminished with increasing |B z |. We note that the oscillations in θ F are not symmetric about the zero base line (see black lines in Fig. 2a as guides to the eye). For quantitative analysis we use the fit function θ_F(∆t) = A exp(−∆t/T_2*) cos(ω_L ∆t + φ) + A_bg exp(−∆t/τ_bg) with ω_L = gμ_B B_z/ħ, where g, μ_B and ħ denote the effective electron g factor, the Bohr magneton and the reduced Planck constant, and φ is a phase factor. The second term accounts for the non-oscillatory time dependent background with a lifetime τ bg and an amplitude A bg (the magnetic field dependence of A bg is shown in the Supplemental Material 35 ). The least-squares fits to the experimental data are shown in Fig. 2a as red curves. We determine a field independent τ bg = 8 ± 2 ns and deduce |g| = 0.42 ± 0.02 from ω L , as expected given that the spin precession is detected in the bulk n-GaAs layer 3 . The extracted spin dephasing times T * 2 (B z ) and amplitudes A(B z ) are plotted in Figs. 2c and 2d, respectively. The longest T * 2 (B z ) values, which exceed 65 ns, are obtained at small magnetic fields. The observed 1/B dependence of T * 2 (B z ) (see red line in Fig. 2c), which indicates inhomogeneous dephasing of the spin packet, is consistent with results obtained from all-optical time-resolved experiments on bulk samples with similar doping concentration 3 . On the other hand, the strong decrease of A(B z ) with magnetic field (Fig. 2d) has not previously been observed in all-optical experiments.
Note that spin precession is barely visible for magnetic fields above 30 mT.
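The two-term fit used for the quantitative analysis (a damped Larmor cosine plus a non-oscillatory exponential background) can be illustrated numerically. In this simplified sketch the decay constants, frequency, and phase are treated as known, so the two amplitudes A and A_bg become linear parameters recoverable by least squares; all parameter values are assumptions chosen for illustration, not the measured ones:

```python
import numpy as np

def faraday_model(t, A, T2, f, phi, A_bg, tau_bg):
    """theta_F(dt) = A exp(-dt/T2*) cos(2*pi*f*dt + phi) + A_bg exp(-dt/tau_bg)."""
    return (A * np.exp(-t / T2) * np.cos(2 * np.pi * f * t + phi)
            + A_bg * np.exp(-t / tau_bg))

t = np.linspace(0.0, 125.0, 500)                       # pump-probe delay, ns
y = faraday_model(t, 1.0, 40.0, 0.06, 0.3, 0.5, 8.0)   # synthetic, noiseless signal

# With T2*, f, phi and tau_bg held fixed, A and A_bg enter linearly,
# so an ordinary least-squares solve recovers them:
basis = np.column_stack([
    np.exp(-t / 40.0) * np.cos(2 * np.pi * 0.06 * t + 0.3),
    np.exp(-t / 8.0),
])
(A_fit, A_bg_fit), *_ = np.linalg.lstsq(basis, y, rcond=None)
```

In practice all six parameters would be fitted nonlinearly to noisy data; the linearized version above only demonstrates the structure of the model (oscillatory term plus background term).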
The A(B z ) dependence might be caused by the B z field acting on the direction of the magnetization M F e of the Fe injector. Increasing B z rotates M F e away from the easy (x-direction) towards the hard axis (z direction) of the Fe layer. This rotation diminishes the x component of the magnetization vector of the injected spin packet, which would result in a decrease of A(B z ). We calculated this dependence (see dashed line in Fig. 2d) for a macrospin M F e using in-plane magnetometry data from the Fe layer (see Fig. 6). The resulting decrease is, however, too small to explain our A(B z ) dependence.
To summarize, there are two striking observations in our time-resolved electrical spin injection experiments: (I) the strong decrease of the Faraday rotation amplitude A(B z ) and (II) the non-oscillatory background in θ F (∆t) with a field independent time constant τ bg = 8 ± 2 ns. As neither has been observed in time-resolved all-optical experiments, it is natural to link these properties to the dynamics of the electrical spin injection process.
In our time-resolved experiment, electron spin packets are injected across a Schottky barrier by short voltage pulses. The depletion layer at the barrier acts like a capacitance. When a voltage pulse is transmitted through the barrier, the capacitance will be charged and subsequently discharged. For studying the effect of the charging and discharging on the spin injection process, we performed high-frequency (HF) electrical characterization of our devices.
C. High-frequency sample characteristic
The HF bandwidth of the sample is deduced from the reflected electrical power S11 by vector network analysis, as shown in Fig. 3a. More than half of the electrical power (S11 > −3 dB) is reflected from the device for frequencies above ∼1.5 GHz. This bandwidth is independent of the operating point over a wide dc-bias range from −2.0 V (reverse-biased Schottky contact) to 1.0 V and allows the sample to absorb voltage pulses of width ∆w ≳ 500 ps. Furthermore, the time evolution of the voltage drop at the Schottky barrier, i.e., its charging and discharging, can be determined directly by time-domain reflectometry (TDR). Any impedance mismatch along the 50 Ω transmission line can be detected by measuring the time evolution of the reflected voltage: a real impedance above 50 Ω yields a reflected step function with negative amplitude, and if the transmission line is terminated by a capacitance, the time evolution of the voltage drop during charging of the capacitance equals the time dependence of the reflected voltage. To analyse the charging dynamics of the Schottky capacitance, we apply a voltage step to the sample with an amplitude of −1 V and a rise time of 100 ps. The time evolution of the reflected voltage step is shown in Fig. 3b. Note that there is a significant temporal broadening of the voltage step. We obtain a similar time constant for the discharging behaviour (not shown). Note also that even after 15 ns the voltage pulse is not fully absorbed by the sample, i.e., about 10 % of its amplitude is still being reflected. As long as the pulse is applied, the absolute amplitude of the reflected voltage rises towards saturation, which is reached at full charging of the capacitance (further information is provided in the Supplemental Material 35).
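To make the reflectometry numbers concrete, the sketch below (Python) evaluates the steady-state reflection coefficient of a mismatched load and the effective charging time constant of an RC-terminated line; the component values are illustrative assumptions, not measured device parameters:

```python
def reflection_coefficient(z_load, z0=50.0):
    """Steady-state voltage reflection coefficient on a z0 line."""
    return (z_load - z0) / (z_load + z0)

def charging_time_constant(c_s, r_s, r_series, z0=50.0):
    """Charging time constant of a capacitance C_s shunted by R_s and
    fed through the series resistance R plus the line impedance z0:
    C_s sees R_s in parallel with (z0 + R)."""
    r_eff = r_s * (z0 + r_series) / (r_s + z0 + r_series)
    return c_s * r_eff

# A residual ~10 % reflection corresponds to a dc load near 61 Ohm:
gamma_dc = reflection_coefficient(61.0)

# Assumed (hypothetical) values R_s = 1 kOhm, R = 10 Ohm, C_s = 0.1 nF
# give a time constant of order 6 ns, the scale seen in the TDR data:
tau = charging_time_constant(c_s=0.1e-9, r_s=1e3, r_series=10.0)
```

The point of the sketch is only the scaling: a nF-scale junction capacitance shunted by an effective resistance of several tens of ohms naturally produces nanosecond charging times.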
To further link the HF dynamics of the Schottky barrier to the pulsed electrical spin injection process, we depict a simple equivalent network of the sample in Fig. 3c. In the reverse-bias regime, the Schottky contact can be modeled by a Schottky capacitance C_s and a parallel tunnel resistance R_s. The underlying n-GaAs detection layer is represented by a resistance R in series. We assume the displacement current I_c to be unpolarized, while the tunneling current I_t carries the spin-polarized electrons. The spin current I_p = ηI_t is given by the spin injection efficiency η. The charging and discharging of the Schottky capacitance is thus directly mapped onto the temporal evolution of the spin current. I_p increases after the voltage pulse is turned on, whereas it decreases after the pulse is turned off after time ∆w, i.e., during the discharge of C_s. If C_s, R_s and η are approximately bias-independent, the increase and decrease of I_p is single-exponential,

I_p(t) = I_p,dc (1 − d e^(−t/τ_sch)) for 0 ≤ t < ∆w,   I_p(t) = I_p(∆w) e^(−(t−∆w)/τ_sch) for t ≥ ∆w,    (2)

with the time constant τ_sch of the Schottky capacitance. It is important to emphasize that the temporal width of the electrically injected spin packet is determined by τ_sch. This temporal broadening becomes particularly important, as individual spins start to precess in the external magnetic field at all times during the spin pulse. The retardation of spin precession results in spin dephasing of the spin packet. This phase "smearing" leads to a decrease of the net magnetization. Its temporal evolution can be estimated by

M(∆t, B) ∝ ∫_0^∆t r_S(t) M_0(∆t − t, B) dt,    (3)

where r_S(t) = I_p(t)/a is the spin injection rate with the active sample area a, and where M_0 is given by an exponentially damped single-spin Larmor precession. The integral can be solved analytically 35 and results in a form as given qualitatively by Eq. 1, describing the dynamics of the injected spin packets, assuming I_p(0) = 0, i.e., d = 1. Note that the non-precessing background signal of θ_F (see Fig. 2a) stems from the discharging of the Schottky capacitance, i.e., τ_sch = τ_bg, while T*_2 is not affected by the integration.
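The convolution of the injection rate with damped single-spin precessions can be checked numerically. The sketch below (Python; parameters in arbitrary units of ns, chosen purely for illustration) compares a direct midpoint-rule evaluation of the integral for an exponentially decaying injection rate against its closed form:

```python
import cmath

def numeric_packet(dt, tau_sch, t2, omega_l, n=20000):
    """Numerically convolve an exponentially decaying injection rate
    r_s(t) ~ exp(-t/tau_sch) with retarded damped Larmor precessions
    exp((1j*omega_l - 1/t2) * (dt - t))."""
    step = dt / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * step
        rate = cmath.exp(-t / tau_sch)
        precession = cmath.exp((1j * omega_l - 1.0 / t2) * (dt - t))
        total += rate * precession * step
    return total

def analytic_packet(dt, tau_sch, t2, omega_l):
    """Closed form of the same integral:
    (exp(a*dt) - exp(-lam*dt)) / (lam + a), a = 1j*omega_l - 1/t2."""
    lam = 1.0 / tau_sch
    a = 1j * omega_l - 1.0 / t2
    return (cmath.exp(a * dt) - cmath.exp(-lam * dt)) / (lam + a)

num = numeric_packet(10.0, 6.0, 30.0, 0.8)   # times in ns, omega in rad/ns
ana = analytic_packet(10.0, 6.0, 30.0, 0.8)
```

The closed form makes the structure of Eq. 1 visible: one term decays with the injection time constant (the non-oscillatory background) and the other carries the damped precession.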
This assignment is confirmed by the independent determination of τ_sch by TDR. The amplitude A(B_z) in Eq. 1 becomes a function of ω_L, T*_2, τ_sch, ∆w and r_S (see Eqs. S17 and S22 of the Supplemental Material 35). To simulate A(B_z), we take the above fitting results from Fig. 2, i.e., T*_2(B_z) and ω_L, as well as ∆w = 2 ns, and vary only τ_sch as a free parameter. The resulting field-dependent amplitudes are plotted in Fig. 2d for various time constants τ_sch. The experimental data are remarkably well reproduced for the τ_sch values determined by TDR (τ_sch = 6 ns) and by the non-oscillatory background of θ_F (τ_sch = 8 ns). This demonstrates that the charging and discharging of the Schottky capacitance is the main source of the amplitude drop in our experiment.
D. Resonant spin amplification
We now analyse the precession of the spin packets after injection with voltage pulses of different width ∆w. This is better probed as a function of the magnetic field than in the time domain. To enhance the signal-to-noise ratio of θ_F, we reduce T_rep to 12.5 ns. As T_rep is now shorter than T*_2, spin packets from subsequent voltage pulses can interfere. We thus enter the regime of resonant spin amplification (RSA) 3,4. With Eq. 3, the net RSA magnetization M_y,RSA results in

M_RSA(∆t, B) = Σ_n M(∆t + n T_rep, B),    (4)

where M_RSA and r_S are periodic in T_rep and defined on the time interval [0, T_rep). Constructive interference of subsequent spin packets leads to a periodic series of resonances as a function of B if a multiple of 1/T_rep equals the Larmor frequency, ω_L = 2πz/T_rep, where z is an integer. Fig. 4a shows RSA scans for ∆w ranging between 500 ps and 10 ns, taken at fixed ∆t and normalized to ∆w. Multiple resonances are observed for short ∆w ≤ 2 ns. The strong decrease of the resonance amplitudes with increasing |B_z| is consistent with the time-domain experiments (see Fig. 2). The number of resonances, which equals the number of Larmor precession cycles, subsequently decreases for broader current pulses. We observe a continuous crossover to the Hanle regime for the broadest pulses of ∆w = 10 ns ∼ T_rep = 12.5 ns, which is close to the dc limit of spin injection as shown in Fig. 1b. This crossover strikingly demonstrates the phase triggering by the current pulses. While pulse-width-induced phase smearing is observed above ∆w = 1.5 ns, there are no effects of the pulse width below 1.5 ns due to the finite τ_sch. Remarkably, pulsed spin injection is possible for ∆w as short as 500 ps.
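From the resonance condition ω_L = 2πz/T_rep, the fields at which RSA maxima occur follow directly. The sketch below (Python) evaluates the expected resonance spacing for the parameters quoted in the text:

```python
import math

MU_B = 9.274e-24  # Bohr magneton (J/T)
H = 6.626e-34     # Planck constant (J*s)

def rsa_resonance_field_mT(z, g, t_rep_s):
    """Magnetic field of the z-th RSA resonance: the Larmor frequency
    is a multiple of the pulse repetition rate,
    omega_L = 2*pi*z/T_rep  =>  B_z = z*h/(|g|*mu_B*T_rep)."""
    return z * H / (abs(g) * MU_B * t_rep_s) * 1e3

# For |g| = 0.42 and T_rep = 12.5 ns, consecutive resonances are
# spaced by roughly 13.6 mT.
spacing = rsa_resonance_field_mT(1, 0.42, 12.5e-9)
```

With resonances spaced by ~13.6 mT and the amplitude strongly suppressed above a few tens of mT, only the first few RSA peaks are expected to be visible, consistent with Fig. 4a.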
The RSA scans are simulated using Eqs. 2 and 4 with τ sch = 6 ns and are depicted in Fig. 4b. The dependence on B z as well as the phase "smearing" with increasing pulse width are well reproduced. Note that even the change of the RSA peak shape for higher order resonances is reproduced by the simulations, demonstrating that our model explains all salient features of the experiment.
IV. CONCLUSION
In conclusion, we have shown that fast current pulses can trigger the macroscopic phase of spin packets electrically injected across an Fe/GaAs Schottky barrier. Current pulses with a width down to 500 ps trigger a spin imbalance observed as magnetic oscillations matching the effective electron g factor of GaAs. Charging and discharging of the Schottky barrier yield a temporal broadening of the spin packets, resulting in a partial dephasing during spin precession. This partial spin dephasing manifests itself in a characteristic decrease of the oscillation amplitude as a function of the magnetic field and as a non-oscillating exponential decrease of the injected spin magnetization. Our model fully captures both of these features, which do not appear when ultra-fast laser pulses are used for optical spin orientation, and it predicts that the time constant of the decreasing background is given by the discharging time constant of the Schottky barrier. This time constant, independently determined by time-domain reflectometry, matches our observations of the phase smearing of the spin packet well. Using a ten times higher repetition frequency of the current pulses, we superimpose injected spin packets in GaAs and enter the regime of resonant spin amplification, which is also well covered by our model. Our model predicts that the phase smearing can be significantly suppressed by reducing the Schottky capacitance. In this respect, spin injection from diluted magnetic semiconductors will be advantageous for realizing all-electrical coherent spintronic devices of high frequency bandwidth.

In Fig. 2 of the main article, a repetition period of the pump/probe pulses of T_rep = 125 ns is used. At a repetition period of T_rep = 12.5 ns (Fig. 4), which is shorter than the spin coherence time T*_2, the injected spin pulses start to superimpose and we observe resonant spin amplification. Here we consider a repetition period T_rep = 125 ns for the probe laser pulse.
In addition to the electrical pump pulse at ∆t = 0, up to four additional current pulses can be applied, each delayed by a further 25 ns, while the repetition of the probe pulses is kept at 125 ns. Hence, we observe the superposition of injected spin packets in the time domain. First, we apply pump pulses with a repetition period of 25 ns and choose a magnetic field of B = 6.6 mT, such that the Larmor period equals the pump-pulse period, 2π/ω_L = 25 ns. Sequential spin packets constructively superimpose and we enter the resonant spin amplification regime (Fig. S1a). The period of the θ_F signal equals the pump repetition period, as expected. In Fig. S1b, we leave out the last two pump pulses of the sequence within the 125 ns probe period. Accordingly, we observe the rise of the magnetization due to the constructive superposition of the first three spin-polarized current pulses in the first half of the probe-pulse period, followed by a decrease of the magnetization due to dephasing of the spin packets in the second half of the probe-pulse period.
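The constructive superposition can be illustrated with a toy sum of damped complex spin packets injected every pump period (Python; idealized delta-like packets, pump period and dephasing time from the text):

```python
import cmath
import math

def packet_sum_amplitude(omega_l, t_rep, t2, n_pulses=200):
    """|sum of n spin packets injected every t_rep|, where each packet
    precesses and decays as exp((1j*omega_l - 1/t2) * age)."""
    total = sum(cmath.exp((1j * omega_l - 1.0 / t2) * k * t_rep)
                for k in range(n_pulses))
    return abs(total)

T_REP, T2 = 25.0, 65.0  # ns: pump period and spin dephasing time

# On resonance the Larmor period equals the pump period (phase 2*pi
# per pulse); a half-integer detuning gives partial cancellation.
on_res = packet_sum_amplitude(2 * math.pi / T_REP, T_REP, T2)
off_res = packet_sum_amplitude(1.5 * 2 * math.pi / T_REP, T_REP, T2)
```

On resonance the packets add in phase and the net amplitude exceeds the single-packet value several-fold, mirroring the rise of magnetization over the first three pump pulses in Fig. S1b.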
S2. MODEL AND DERIVATIONS
In this section, we derive the fit Eq. 1 of the article from the ansatz Eq. 3. This calculation yields the expression of the simulated decrease of the Faraday rotation amplitude A(B) as a function of the transverse magnetic field B as plotted in Fig. 2d of the article.
Our model is based on the equivalent circuit diagram (Fig. 3c of the article), in which the Schottky contact is replaced by a capacitance C_s and a parallel resistance R_s. The spin-polarized current I_p(t) from the Fe injector into the semiconductor is transmitted only by the tunnel current I_t(t) through the Schottky resistance R_s. The displacement current I_c(t) through the capacitance is assumed to be unpolarized. In order to determine I_p(t), we use a local model neglecting runtime effects, since the distance between the elements in the equivalent circuit diagram is far smaller than the electrical AC wavelengths used. From Fig. 1 it can be deduced that the Schottky contact in the reverse-bias regime (U < 0 V) is mainly ohmic. Since we use voltage pulses with an amplitude as high as −1.8 V, we can for simplicity neglect the weak bias dependence of the tunnel resistance. When an external negative bias pulse of width ∆w is applied to the sample at time t = 0, the Schottky capacitance starts to charge. If we assume for simplicity that the magnitude of the capacitance is constant, the tunnel current I_t(t) through the parallel resistor starts to rise exponentially and approaches the tunnel current I_t,dc, at which the Schottky capacitance would be fully charged. When the applied external bias is switched off at time t = ∆w, the Schottky capacitance starts to discharge. This yields an exponential decrease of the voltage dropping across the parallel resistance R_s, and thus the tunnel current I_t decreases exponentially with the characteristic decay time denoted τ_sch. If we further assume that the spin injection efficiency η is bias-independent, the polarized current is I_p(t) = ηI_t(t). In the case of a dc bias applied to the sample, the spin injection rate reaches its maximum value I_p,dc = ηI_t,dc.
Hence, if a single voltage pulse is applied starting at time t = 0, the polarized current is

I_p(t) = I_p,dc (1 − d e^(−t/τ_sch)) for 0 ≤ t < ∆w,   I_p(t) = I_p(∆w) e^(−(t−∆w)/τ_sch) for t ≥ ∆w,    (S1)

with the time constant τ_sch for charging and discharging the Schottky capacitance. The constant d is determined by the boundary conditions, i.e., the charging state of the Schottky capacitor when the next current pulse arrives. For example, for a pulse repetition time T_rep much longer than τ_sch, the Schottky capacitor is fully discharged, the spin-polarized current is I_p(0) = I_p(T_rep) = 0 and hence d = 1. The time evolution of the voltage dropping at the sample, and thus τ_sch, can be determined directly from time-domain reflectometry, as plotted in Fig. 3b of the article. Now, we calculate the effect of the time-dependent polarized current I_p(t) on the time evolution of the observed Faraday rotation signal θ_F(∆t, B) in a transverse magnetic field B. The precession of a coherent spin packet in a transverse magnetic field is observed by its magnetization M. Let us start the calculation with the simple case of purely coherent spin injection, i.e., all spins are injected at exactly the same time, with a magnetization denoted by M_0. This case is relevant for optical spin orientation by an ultra-short laser pulse, which is much shorter than the Larmor precession period of the oriented electron spins. Since the electrically injected spins are pumped perpendicular to the observation direction, which is determined by the probe laser beam, θ_F(∆t) is then proportional to

M⊥_0(∆t, B) = M_0 e^(−∆t/T*_2) sin(ω_L ∆t),    (S2)

where ∆t, T*_2 and ω_L denote the pump-probe delay, the spin dephasing time and the Larmor frequency, respectively. Using a complex M_0(∆t, B) = M_0 e^(iω_L ∆t) e^(−∆t/T*_2) with M⊥_0(∆t, B) = Im(M_0(∆t, B)), the calculation becomes independent of the observation direction. The proportionality factor depends on the number of injected electrons and the magnetic moment of a single electron. It does not depend on the external magnetic field B.
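A minimal sketch of the piecewise current of Eq. S1 (Python; d = 1 for a fully discharged capacitor, with ∆w = 2 ns and τ_sch = 6 ns as in the text) makes the continuity at the end of the pulse explicit:

```python
import math

def polarized_current(t, dw, tau_sch, i_dc=1.0, d=1.0):
    """Piecewise spin-polarized current (sketch of Eq. S1):
    exponential charging during the pulse, exponential discharge after."""
    if t < dw:
        return i_dc * (1.0 - d * math.exp(-t / tau_sch))
    peak = i_dc * (1.0 - d * math.exp(-dw / tau_sch))
    return peak * math.exp(-(t - dw) / tau_sch)

# With dw = 2 ns and tau_sch = 6 ns, the spin packet is temporally
# broadened well beyond the 2 ns voltage pulse; the current is
# continuous across t = dw.
left = polarized_current(1.999999, 2.0, 6.0)
right = polarized_current(2.000001, 2.0, 6.0)
```

Because τ_sch exceeds ∆w here, the current never reaches I_p,dc during the pulse, and a long discharging tail carries a sizable fraction of the injected spins.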
Now, we take into account that the spins are injected slowly compared to the Larmor precession frequency. Thus, the first electron spins already precess when further electrons are injected in the direction given by the static magnetization of the iron layer. The probe laser measures, via θ_F, the total magnetization M induced by the injected spins:

M(∆t, B) ∝ ∫_0^∆t r_S(t) M_0(∆t − t, B) dt.    (S4)

Despite its closed form, Eq. S4 is a complex integral over retarded, purely coherent spin precessions M_0(t, B), which is not suitable for data fitting. In the experiment, however, we observe the precessing net magnetization of the total injected spin ensemble by the Faraday rotation of the probe beam. In the following, our goal is to transform Eq. S4 into Eq. 1 of the article, which is used to fit the measured θ_F(∆t, B) signal. We start by discussing M(∆t, B) separately during the charging (0 ≤ ∆t < ∆w) and discharging (∆w ≤ ∆t < T_rep) process of the Schottky contact and define the discharging contribution as a sum of two terms, M_dis(∆t, B) = M^(1)_dis(∆t, B) + M^(2)_dis(∆t′, B), with ∆t′ = ∆t − ∆w (Eq. S6).
M^(2)_dis(∆t′, B) denotes the total precessing magnetization at time ∆t′ if spins are injected with an exponentially damped injection rate starting at ∆t′ = 0. Hence, it is due to the exponentially damped tail of the polarized current in Fig. 3d of the main article. In order to simplify M^(2)_dis(∆t′, B), we briefly neglect the B- and ∆t′-independent real prefactor, focus on solving the integral, and write for brevity γ = 1/T*_2 and λ = 1/τ_sch:

M^(2)_dis(∆t′, B) ∝ ∫_0^∆t′ e^(−λt) e^((iω_L − γ)(∆t′ − t)) dt = [e^((iω_L − γ)∆t′) − e^(−λ∆t′)] / (λ − γ + iω_L).    (S10)

Since we observe the spins by Faraday rotation parallel to the y direction, and thus perpendicular to their original polarization direction in the Fe layer (parallel to the x direction), we are interested in M^⊥(2)_dis(∆t′, B) = Im(M^(2)_dis(∆t′, B)). In the last step we introduced, for brevity, the unit-less constant

Γ = (λ − γ)/ω_L.    (S11)

Adding the real prefactor from Eq. S9, we finally find

M^⊥(2)_dis(∆t′, B) ∝ [e^(−λ∆t′) + √(Γ² + 1) e^(−γ∆t′) sin(ω_L∆t′ + δ_2)] / (ω_L(Γ² + 1)).    (S12)

All the B-field dependence of M^⊥(2)_dis(∆t′, B) is given by ω_L. The proportionality factor is still independent of B. Strikingly, the evolution of the calculated net magnetization in Eq. S12 is the sum

M^⊥(2)_dis(∆t′, B) = M^⊥(2)_bg(∆t′, B) + M^⊥(2)_osc(∆t′, B)    (S13)

of an exponentially decreasing background with the characteristic time constant τ_sch of the Schottky contact,

M^⊥(2)_bg(∆t′, B) ∝ e^(−∆t′/τ_sch) / (ω_L(Γ² + 1)),    (S14)

and an exponentially damped oscillation M^⊥(2)_osc(∆t′, B) with the time constant T*_2 of the spins. The latter can be expressed in terms of the net magnetization of a purely coherently injected spin packet M⊥_0(∆t, B) (Eq. S2) with an additional phase δ_2:

M^⊥(2)_osc(∆t′, B) = A^(2) e^(−∆t′/T*_2) sin(ω_L∆t′ + δ_2),    (S16)

with the definitions

δ_2 = −arctan(1/Γ) and A^(2) ∝ (1 − e^(−∆w/τ_sch)) / (|ω_L| √(Γ² + 1)),    (S17)

where we used sin(arctan(1/Γ)) = 1/√(Γ² + 1) and cos(arctan(1/Γ)) = Γ/√(Γ² + 1). Note that ω_L < 0 for an effective g factor g < 0, as is the case for GaAs. Remarkably, the amplitude A^(2) of the precessing net magnetization becomes a function of the absolute magnetic field |B| because of |ω_L| and Γ²(ω_L) (see Eq. S11). For a vanishing Schottky capacitance, τ_sch → 0, which yields a square-like pulsed I_p(t), the summand M^⊥(2)_osc vanishes due to Γ → ∞ and A^(2) → 0. This limit confirms the interpretation of M^⊥(2)_osc.
A very large time constant, τ_sch → ∞, suppresses the spin injection, I_p(t) → 0 (Eq. S1), and consistently results in A^(2) → 0 due to the prefactor (1 − exp(−∆w/τ_sch)) in Eq. S17.
Finally, we consider the first summand M^(1)_dis(∆t, B) of Eq. S6. It can be expressed as

M^(1)_dis(∆t, B) = c(B) e^(iω_L∆t) e^(−∆t/T*_2),

with a complex constant c(B), which results from the integral in Eq. S6 and does not depend on ∆t but on ∆w. To put it more clearly, the first summand of the net magnetization during the discharge of the capacitance C_s (∆t > ∆w) in Eq. S6 is a Larmor precession with frequency ω_L and decay time T*_2, starting with a phase δ_1. The superposition of the exponentially damped oscillations M^⊥(1)_dis(∆t, B) and M^⊥(2)_osc(∆t, B) from Eq. S16 yields a new oscillation with amplitude A(B) and phase δ(B). Together with the exponential background (Eq. S14), the measured θ_F(∆t, B) ∝ Im(M(∆t, B)) is thus equivalent to the fitting formula Eq. 1 of the main article for the considered polarized current I_p(t).
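Since both summands decay with the same T*_2, their superposition reduces to phasor addition of amplitudes and phases; a minimal sketch (Python, with illustrative amplitudes and phases):

```python
import cmath
import math

def combine_oscillations(a1, delta1, a2, delta2):
    """Add two equally damped oscillations a*sin(w*t + delta) by phasor
    addition; returns the combined amplitude and phase."""
    total = a1 * cmath.exp(1j * delta1) + a2 * cmath.exp(1j * delta2)
    return abs(total), cmath.phase(total)

# Two equal-amplitude components in quadrature combine to a single
# oscillation sqrt(2) as large, shifted by 45 degrees.
amp, phase = combine_oscillations(1.0, 0.0, 1.0, math.pi / 2)
```

This is exactly why the sum of the two damped oscillations can be absorbed into the single amplitude A(B) and phase δ(B) of the fit formula.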
If the voltage pulse repetition time T_rep is shorter than the spin dephasing time T*_2 (see Fig. 4 of the main article), interference of subsequent voltage pulses has to be taken into account. In this case of resonant spin amplification, the ansatz Eq. 3 of the article has to be replaced by Eq. 4. The summation over the voltage pulses leads to resonant spin amplification and a more complicated dependence of the amplitude and the phase of the oscillating net magnetization M_RSA on the applied transverse magnetic field. It is not surprising that resonant spin amplification can also be observed if the period of the probe, T_rep,probe, is much larger than T*_2, but the period of the pump, T_rep,pump, is smaller than T*_2 and fulfills the resonance condition 2π/T_rep,pump ≈ ω_L, as shown in section S1 of the supplements. Note that the small positive shift in Faraday rotation θ_F in Fig. S1 originates from the effective non-oscillating background in Eq. S14. In fact, we can derive the magnetization dynamics of the resonant spin amplification case, M_RSA(∆t, B), analogously to the case of a single pump pulse shown here, but have to solve for d using the condition I_p(0) = I_p(T_rep) = I_p(T_rep,pump) in Eq. S1. The result was used for the simulations shown in Fig. 4b of the main article.

FIG. S2: Magnetic field dependence of the background signal during pulsed spin injection. The parameter A_bg, determined from least-squares fits of the measured Faraday rotation curves plotted in Fig. 2a of the main text, is plotted as a function of the external magnetic field B_z. The error bars include the least-squares fit errors only. The red line represents the expected dependence with the determined parameters according to Eq. S14.
S4. TIME DOMAIN REFLECTOMETRY
For probing the charging dynamics of the Schottky contact discussed in Fig. 3b of the main article, we added a broadband 50 % power splitter to the otherwise unchanged setup (Fig. S3a) and recorded the voltage (U_ref) back-reflected from the sample, together with part of the voltage applied to the sample, U_in, using a fast sampling scope. The observed total voltage U_tot = U_in + U_ref (Fig. S3b) reveals the evolution of the voltage at the sample starting at t = 0 s. As long as the voltage pulse is applied (total duration ∆w = 264 ns), the Schottky capacitance charges until the reflected voltage saturates. Its saturation value would correspond to −0.9 V = U_amp if the parallel resistance R_s to the Schottky capacitance were infinite (open termination). The initial drop of the absolute voltage at t = 0 can be understood as the charging of the fully uncharged Schottky capacitance. Note that a total voltage drop to zero is expected if the parallel resistance R_s to the Schottky capacitance is zero (shorted termination). The discharging of the Schottky capacitance, starting at t = 264 ns, results in the reversed dynamics. In the inset of Fig. S3b, we compare time-domain reflectometry (here we used a pulse of U_amp = −1 V and ∆w = 66 ns) applied to the sample and to a broadband 50 Ω impedance replacing the sample. In the latter case only the U_in part of U_tot is measured, as expected.
Objective This study aims to compare the utilization of 3D-CT reconstruction in measuring pedicle outer width (POW) between younger/middle-aged patients (<60 years) and older patients (≥60 years) with thoracolumbar spine fractures (TSF). Methods We conducted a retrospective study from January 2021 to December 2022, involving a total of 108 patients with TSF. The study population consisted of 62 patients aged ≥60 years (observation group) and 46 patients aged <60 years (control group). We compared the POW on the right and left sides of the thoracolumbar spine between the two groups. Additionally, we analyzed the POW by gender within each group and calculated the incidence of patients falling below the critical values for pedicle puncture (5 mm) and pedicle screw placement (7 mm) in both groups. Results There were no statistically significant differences in POW between the left and right sides of each corresponding vertebra within either group (P > 0.05). In the observation group, both male and female patients had significantly smaller POW compared to the control group (P < 0.05). However, no significant difference in POW was observed between the same-sex groups in the L4 to L5 vertebrae (P > 0.05). In the observation group, the POW was less than 5 mm in 9.33% (81/868) of cases and less than 7 mm in 49.88% (433/868) of cases, primarily observed from T11 to L3. In the control group, 4.81% (31/644) of cases had a POW of less than 5 mm, and 13.66% (88/644) had a POW of less than 7 mm. Conclusion Utilizing preoperative 3D-CT reconstruction to measure POW in patients with TSF not only facilitates the assessment of surgical feasibility but also aids in surgical pathway planning, thus potentially reducing the incidence of postoperative complications.
Introduction
Thoracolumbar spine fractures (TSF) are primarily caused by osteoporosis in elderly patients, often triggered by minor trauma. The severity of the disease can be exacerbated by significantly reduced bone strength and disrupted bone balance (1). As society ages progressively, there has been a notable increase in the number of elderly patients seeking medical treatment. Surgical intervention currently remains the primary approach, with percutaneous kyphoplasty (PKP) being a commonly utilized procedure in clinical practice. PKP is renowned for its minimally invasive nature, effective pain relief, and ability to restore vertebral height, thereby serving as the cornerstone of surgical management for TSF (2, 3). However, the occurrence of postoperative complications, including pedicle wall fractures, spinal cord compression, and nerve root injuries, is closely related to the anatomical characteristics of the pedicle. Therefore, accurate measurement of pedicle morphology and dimensions is crucial (4).
This retrospective analysis comprises 108 TSF patients (T11 to L5) and aims to compare the changes and characteristics of POW measurements in two distinct age groups (age <60 years and ≥60 years), providing valuable insights for clinical surgical practice.
Study setting and subjects
This retrospective study utilized electronic medical records (EMR) from Nanjing Hospital of Traditional Chinese Medicine Affiliated to Nanjing University of Chinese Medicine to collect data on patients who received treatment between January 2021 and December 2022. Demographic data (i.e., age and sex), course-of-disease records, prescription drug dispensation records, bone mineral density (BMD) data, and fracture site records were captured.
The inclusion criteria were as follows: (1) age ≥ 18 years; (2) confirmed diagnosis of TSF, including osteoporotic vertebral compression fractures (OVCF) caused by minor trauma; (3) no history of spinal fractures before TSF; (4) a definite history of trauma. The exclusion criteria were as follows: (1) patients with vertebral tumors or tuberculosis; (2) patients with infectious diseases, coagulation disorders, or spinal cord nerve injuries; (3) patients with vertebral pedicle fractures or dislocations that hindered the measurement of POW; and (4) patients with poor adherence or who discontinued follow-up.
The study followed the Declaration of Helsinki (revised in 2013) and was approved by the ethics committee of Nanjing Hospital of Traditional Chinese Medicine Affiliated to Nanjing University of Chinese Medicine. All patients included in this study provided informed consent for the surgical protocol.
POW measurement
The POW measurements of the thoracolumbar spine (T11 to L5) were obtained with a Revolution 256-row CT scanner (General Electric, USA) at 120 kV and 250 mA. The acquired images were transferred to the ADW4.6 workstation for processing and storage. The images had a slice thickness and slice spacing of 0.625 mm, a window width of 1,300 HU, a window level of 400 HU, and a distance accuracy of 0.1 mm. Surface-masked images of T11 to L5 were generated using techniques such as stage limitation and regional clipping. Reconstruction parameters were adjusted while the soft tissues surrounding the vertebral body were shielded, resulting in the acquisition of multidimensional images (Figure 1A). The midpoint of the shortest distance between the superior and inferior walls of the pedicle was selected as point O, and the axis of the pedicle was drawn as P (Figure 1B). The POW was defined as the distance between the medial and lateral bone cortex at the narrowest point of the pedicle, passing through P and parallel to the cross-sectional image of the superior endplate (Figure 1C).
Outcome indicators
We measured the POW of the thoracolumbar spine (T11 to L5) on the left and right sides of the corresponding vertebrae in the patients. Subsequently, we compared the POW measurements between the two groups. Furthermore, we analyzed and compared the POW measurements of the thoracolumbar spine between the two groups across genders. To determine the incidence of patients falling below the threshold values for pedicle puncture (POW < 5 mm) and pedicle screw placement (POW < 7 mm), we referenced the threshold values used in both domestic and international clinical settings and calculated the measurements accordingly in the two groups.
Statistical methods
All analyses were conducted with SPSS (version 24.0, IBM, Inc., New York, USA). Continuous variables were compared using a t-test and presented as the mean ± standard deviation (mean ± SD). Categorical variables were compared using a chi-square test and presented as frequencies (%). P < 0.05 was considered statistically significant.
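As an illustration of the group comparison performed on continuous variables, the sketch below implements Welch's unequal-variance variant of the two-sample t statistic in plain Python; the POW values are hypothetical examples, not study data, and this is not the authors' SPSS pipeline:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal-variance t-test):
    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    m_a, m_b = statistics.mean(sample_a), statistics.mean(sample_b)
    v_a, v_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(v_a / len(sample_a) + v_b / len(sample_b))
    return (m_a - m_b) / se

# Hypothetical POW measurements (mm) for two small groups:
older = [6.1, 5.8, 6.4, 5.9, 6.2]
younger = [7.0, 6.8, 7.3, 6.9, 7.1]
t_stat = welch_t(older, younger)
```

A large negative t statistic here would indicate that the first group's mean POW is substantially smaller, mirroring the direction of the group differences reported in Tables 3 and 4.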
Baseline characteristics
A total of 108 patients who met the inclusion and exclusion criteria were included in this study. The observation group consisted of 62 elderly patients (age ≥60 years). Among these, 48 patients had a single vertebral compression fracture, and 14 patients had two or more fractures. The control group consisted of 46 young and middle-aged patients (age <60 years). Among these, 38 patients had a single vertebral fracture, while 8 patients had two or more fractures. The baseline characteristics of the patients are shown in Table 1.
Comparison of POW between the left and right sides of each corresponding vertebra
There was no statistically significant difference (P > 0.05) in POW measurements between the left and right sides of each corresponding vertebra (T11 to L5) within both groups (Table 2). Therefore, the average of the POW measurements from the left and right sides of each corresponding vertebra was calculated and used as the POW value for the respective pedicle.
Comparison of POW between the two groups
As shown in Table 3, in the observation group, the POW measurements of each corresponding vertebra from T11 to L3 were smaller than those in the control group (P < 0.05). However, there was no statistically significant difference in the POW measurements of L4 to L5 between the two groups (P > 0.05).
Comparison of POW between the genders
In both the observation and control groups, the POW measurements of male patients from T11 to L3 were greater than those of female patients (P < 0.05). However, there was no significant difference in the POW of L4 to L5 between males and females in either group (P > 0.05). In the comparison of POW within the same gender between the two groups, the POW measurements in each corresponding vertebra from T12 to L3 were smaller in the observation group than in the control group (P < 0.05). However, there was no significant difference in the POW of L4 to L5 within the same gender between the two groups (P > 0.05). See Table 4.
Occurrence of below the threshold for pedicle puncture and nail placement
In the observation group, a total of 868 pedicles were measured, and 9.33% (81/868) of them had a POW below the critical value for pedicle puncture (<5 mm). POW values below the critical value for pedicle screw placement (<7 mm) accounted for 49.88% (433/868), with the majority of these measurements observed from T11 to L3. In the control group, a total of 644 pedicles were measured. Among them, 4.81% (31/644) had a POW below 5 mm and 13.66% (88/644) had a POW below 7 mm. These measurements were primarily distributed from T11 to L3. See Table 5.
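The reported incidence rates follow directly from the raw counts; a minimal check in Python:

```python
def incidence_pct(below, total):
    """Share of measured pedicles below a threshold, in percent,
    rounded to two decimals."""
    return round(100.0 * below / total, 2)

# Reproducing the reported rates from the raw counts:
obs_below_5mm = incidence_pct(81, 868)   # observation group, <5 mm
obs_below_7mm = incidence_pct(433, 868)  # observation group, <7 mm
ctl_below_5mm = incidence_pct(31, 644)   # control group, <5 mm
ctl_below_7mm = incidence_pct(88, 644)   # control group, <7 mm
```

Note that 88/644 evaluates to 13.66%, which is the figure used consistently here.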
Discussion
TSF is a prevalent type of fracture observed in clinical spinal surgery, particularly among the elderly population (5, 6). It is often attributed to factors such as gastrointestinal dysfunction, impaired absorption of calcium, decreased bone formation and mineralization capacity, and reduced BMD. As the bone trabeculae become less dense and the bones become more brittle, TSF can occur even in the absence of apparent causal factors or with minimal external force (7, 8).
The diameter of the vertebral pedicle changes continuously with age, widening progressively through adulthood within certain age brackets; in females it ceases to increase after the age of 50 and in males after 60, thereafter exhibiting a diminishing trend (9-11). Our study findings revealed that older patients with TSF had smaller vertebral POW measurements compared to young and middle-aged individuals, specifically in the range from T11 to L3 (P < 0.05). In addition, our investigation revealed a gender disparity in the POW measurements of the thoracolumbar vertebrae (T11 to L3) within the same cohort: males exhibited larger POW measurements than females. Notably, there was a male-to-female ratio of 9:22 among elderly patients, indicating that female patients were more susceptible to TSF. The strength of the lumbar extensor muscles decreases with age, and this gradual weakening contributes to the development of stress changes in the spine, particularly increasing the vulnerability of the anterior spine to osteoporotic vertebral compression fractures. The age-related decline in the strength of the lumbar extensors also alters the spinal stress distribution, resulting in increased pressure on the anterior column of the spine, an increased angle of thoracic kyphosis, a decreased angle of lumbar lordosis, and a shift in the body's center of gravity. Consequently, these changes contribute to remodeling of the vertebral arches. It is noteworthy that while the T11 and T12 vertebrae are still connected to the ribs, they do not significantly contribute to the formation of the thoracic contour. Therefore, the stress concentration in the spinal region shifts from the thoracic region to the lumbar anterior convexity. As a result, TSF most commonly occurs between the T11 vertebra and the L3 vertebra, with a particularly high prevalence at the L1 and L2 vertebrae. Furthermore, the significant hormonal
changes that occur in elderly female patients after menopause make them more susceptible to osteoporosis, increasing their risk of fractures.
Currently, surgical treatment remains the preferred approach for achieving efficient recovery in patients with TSF. In particular, the pedicle plays an indispensable role in PKP, a commonly employed surgical procedure for treating TSF (12). The assessment of pedicle parameters, particularly POW, is crucial for the successful execution of surgical procedures. A reduction in POW significantly impacts intraoperative vertebral pedicle puncture. A POW of less than 5 mm indicates a narrow vertebral pedicle, making it unsuitable for standard-sized puncture catheters (13). As POW decreases, the catheter diameter must be adjusted accordingly. Therefore, preoperative POW measurements provide direct evidence for selecting the appropriate puncture catheter during the procedure. In addition, in patients presenting with severe spinal instability, spinal cord injury, spinal tumors, and similar conditions, vertebral pedicle screw insertion is warranted (14). A diminutive POW may exacerbate the difficulty of screw insertion, potentially leading to complications such as fractures of the inner and outer walls of the pedicle (14). Therefore, when performing vertebral pedicle screw insertion, it is imperative to calculate the appropriate critical value for pedicle screw placement based on preoperative POW measurements, aiming to mitigate postoperative complications. POW, as one of the crucial parameters, serves as a valuable tool for clinicians to discern the anatomical characteristics of the vertebral pedicle (15). Critical values have been established for pedicle puncture (POW < 5 mm) and pedicle screw placement (POW < 7 mm) (16). When the POW falls below these critical values, conventional puncture instruments should not be used. Therefore, precise determination of the POW value is essential for procedural success.
Conclusion
In this study, we observed that the percentage of patients with POW measurements below the critical value for pedicle puncture (5 mm) and below the critical value for pedicle screw placement (7 mm) was higher in the observation group than in the control group. In addition, in both the observation group and the control group, the percentage of females with POW below 5 mm was higher than that of males in the same group, as was the percentage of females with POW measurements below 7 mm. These findings underscore the importance of exercising additional caution when performing pedicle puncture, particularly in females, and especially when the fracture involves vertebral levels ranging from T12 to L2.
FIGURE 1 (A) A multidimensional image from T11 to L5, highlighting the significant wedge-shaped flattening of the L2 vertebral body.(B) A lateral image identifying the position of the pedicle axis (P).(C) A cross-sectional view used to measure the POW value.
TABLE 1
Baseline characteristics of patients.
TABLE 2
Comparison of POW between the right and left sides of each corresponding vertebra (mean ± SD, mm).
TABLE 3
Comparison of POW between the two groups (mean ± SD, mm).
TABLE 4
Comparison of POW between the genders (mean ± SD, mm).
Compared with females in the observation group, *P < 0.05; compared with females in the control group, **P < 0.05.
TABLE 5
Occurrence of POW below the threshold for pedicle puncture and nail placement.
Slow fusion pore expansion creates a unique reaction chamber for co-packaged cargo
A lumenal secretory granule protein can slow fusion pore dilation and thus its own discharge. Bohannon et al. demonstrate another outcome: the creation of a nanoscale chemical reaction chamber for granule contents in which the pH is suddenly neutralized upon fusion.
Introduction
Upon fusion of the secretory granule with the plasma membrane, lumenal constituents are discharged at very different rates. This is explained in some cases by molecular size. For example, a low molecular weight neurotransmitter such as epinephrine is usually discharged in fewer than 100 ms, whereas co-stored proteins can be released over many seconds. Specific proteins can be discharged at widely different rates independently of cell type. GFP-tagged neuropeptide Y (NPY) and tissue plasminogen activator (tPA) have contrasting behaviors. NPY usually discharges within several hundred milliseconds of fusion, whereas tPA discharges after many seconds in primary chromaffin cells (Perrais et al., 2004), PC12 cells (Taraska et al., 2003), and insulin-secreting cells (Tsuboi et al., 2004). This large difference is unlikely to reflect simply a difference in the molecular weights of the proteins (tPA-GFP, ∼100 kD; NPY-GFP, ∼40 kD). Indeed, there is another explanation. By measuring the orientation of a fluorescent probe within the plasma membrane with polarized total internal reflection fluorescence (pTIRF) microscopy, we found that more than two-thirds of the fusion events of tPA-cerulean-containing granules maintain curvature for greater than 10 s (Weiss et al., 2014a). The maintained curvature reflects a narrow fusion pore. This conclusion is consistent with the finding using a fluorescent cytosolic probe that tPA-containing granules maintain long-lived, volume-enclosing structures on the surface of PC12 cells (Taraska et al., 2003). Such events are uncommon upon fusion of fluorescent-labeled NPY-containing granules. Indeed, pTIRF microscopy (Anantharam et al., 2010a;Weiss et al., 2014a) and real-time imaging of invaginations on the cell surface (Chiang et al., 2014) reveal that curvature changes and volume-filling omega figures resulting from fusion of NPY-containing granules have a much shorter duration, often no longer than several hundred milliseconds. 
tPA initiates an autocrine/paracrine pathway through its proteolytic enzymatic activity that locally regulates subsequent exocytosis within the adrenal medulla (Parmer et al., 1997, 2000). Thus, the slow postfusion discharge of tPA at the cell surface likely influences the kinetics of the pathway.
The ability of tPA to almost freeze the fusion pore may have effects in addition to slowing its own release. Our experiments explore the notion that the inhibition of fusion pore expansion creates a novel compartment on the cell surface in which undiluted lumenal proteins are suddenly exposed to a pH shift from 5.5 to 7.4. We explore the implications of this concept in the context of the biochemistry of tPA.
tPA is best known as a circulating serine protease that converts plasminogen into plasmin, which in turn breaks down fibrin clots by proteolysis. The activity of tPA in the plasma is regulated by plasminogen activator inhibitor 1 (PAI), a protein that acts as a suicide substrate to covalently inhibit the proteolytic activity of tPA. These proteins are clinically important. Recombinant tPA is used intravenously to treat stroke (Fugate and Rabinstein, 2014), and dysregulation of tPA and PAI secretion is associated with thrombophilia (Sartori et al., 2003), hyperfibrinolysis (Ladenvall et al., 2000), obesity (Dietrich et al., 2016), and angiogenesis. tPA is expressed in many tissues including vascular endothelial cells (Loscalzo and Braunwald, 1988), adrenal chromaffin cells (Parmer et al., 1997), posterior pituitary nerve terminals (Miyata et al., 2005), and central nervous system (hypothalamic) neurons (Salles and Strickland, 2002).
PAI and tPA are expressed in the adrenal medulla. Both colocalize with large dense-core catecholamine-containing chromaffin granules in sucrose density gradients (Parmer et al., 1997;Jiang et al., 2011). Both are co-secreted with catecholamine upon stimulation with a nicotinic agonist or elevated K + . We had previously found by immunocytochemistry that tPA is readily detected in chromaffin granules in ∼20% of primary cultured chromaffin cells (Weiss et al., 2014b). In the present study, we show that PAI is expressed in a much larger fraction of chromaffin cells and that in tPA-expressing cells, PAI is colocalized in granules with tPA. We demonstrate that the low intragranular pH (pH 5.5) protects tPA from inactivation by co-stored PAI and investigate PAI discharge and the effects of PAI on both fusion pore dynamics and the discharge rate of coexpressed fluorescently labeled tPA. The results lead us to propose the formation of a nanoscale reaction chamber created by the fused granule and the long-lived narrow fusion pore that prevents rapid release of proteins but permits a sudden increase in lumenal pH. This compartment likely regulates the amount of enzymatically active tPA released extracellularly and creates a new molecular entity, the tPA/PAI complex, which may itself have a physiological function.
Molecular biology
Constructs used to express human tPA, tPA-pHluorin (tPA-pHl), tPA-S513A, tPA-S513A-pHl, PAI, and PAI-pHl were created using synthetic gBlocks (Integrated DNA Technologies) as described in Supplemental Methods. For consensus sequences, accession no. NM_000930.4 was used for human tPA and accession no. NM_000602.4 for human PAI. NPY-pHl was a gift of W. Almers (Vollum Institute, Oregon Health and Science University, Portland, OR).
Chromaffin cell transfection
Primary bovine adrenal medullary chromaffin cells were isolated as previously described (Wick et al., 1993), plated on 35-mm glass-bottom dishes (refractive index 1.51; World Precision Instruments) treated with poly-d-lysine, and layered with bovine collagen. Chromaffin cells were transfected with the Neon Transfection System (Invitrogen). Cells were electroporated in Invitrogen's proprietary Solution R or in a homemade resuspension buffer (250 mM sucrose and 1 mM MgCl2 in Dulbecco's PBS; Brees and Fransen, 2014). 10⁶ cells and up to 10 µg DNA in 100 µl total resuspension buffer were electroporated with pulse settings of 1,100 mV and 40 ms. In cases where an unlabeled construct was transfected along with a fluorescently labeled construct, the unlabeled construct was added in excess, at a ratio of 2:1.
Immunocytochemistry
Chromaffin cells were stained and analyzed as described in detail in the figure legends. Images were acquired on an Olympus Fluoview 500 confocal microscope with a 60× 1.42-NA oil objective. An argon 488-nm laser with a 505- to 525-nm bandpass filter, a HeNe green 543-nm laser with a 560- to 600-nm bandpass filter, and a HeNe red 633-nm laser with a longpass filter were used. To minimize spillover, images with different excitations were acquired sequentially. Within an experiment, initial settings were adjusted so that the brightest pixels for each color were unsaturated, and these settings were maintained throughout. Images were analyzed with ImageJ (Schneider et al., 2012), and statistics were analyzed with GraphPad Prism 6.
Microscopy
Live cell experiments were conducted on an inverted Olympus IX70 microscope with a 1.49-NA objective and a specialized pTIRF excitation scheme as previously described (Weiss et al., 2014a). A 488-nm laser (Coherent OBIS or Melles Griot 543-AP-01) was used to visualize pHluorin, and a 561-nm laser polarized into P- and S-polarizations (Coherent OBIS) was used to visualize the carbocyanine dye 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindodicarbocyanine, 4-chlorobenzenesulfonate salt (DiD). The filter cube contained a dichroic mirror/emission filter combination: ZT488/561rpc and ZET488/561m for NPY-pHl/DiD or tPA-pHl/DiD (Chroma Technology). The aligned excitation beams were focused and positioned near the periphery of the back focal plane of a 60× 1.49-NA oil immersion objective (Olympus) so that the laser beam was incident on the coverslip at ∼70° from the normal, giving a decay constant for the evanescent field of ∼100 nm. The galvanometer mirrors were computer controlled through a DAQ board (NI PCIE-6351; National Instruments) and a custom LabVIEW program.
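The quoted ~100-nm decay constant follows from the standard expression for the evanescent-field penetration depth, d = λ / (4π √(n₁² sin²θ − n₂²)). A minimal check of that formula, using the stated glass/oil index of 1.51 and incidence angle of ~70°; the sample refractive index (~1.37, typical of cytoplasm) is an assumption, not a value from the text:

```python
import math

def evanescent_decay_depth(wavelength_nm, theta_deg, n_glass=1.51, n_sample=1.37):
    """1/e decay depth of a TIRF evanescent field:
    d = lambda / (4*pi*sqrt(n1^2 * sin^2(theta) - n2^2))."""
    n1_sin = n_glass * math.sin(math.radians(theta_deg))
    if n1_sin <= n_sample:
        raise ValueError("below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(n1_sin ** 2 - n_sample ** 2))

# 561-nm excitation incident at ~70 degrees through glass, as described above
d = evanescent_decay_depth(561.0, 70.0)  # on the order of 100 nm
```

The depth is sensitive to the assumed sample index; values between water (1.33) and dense cytoplasm (~1.38) all give depths in the ~90-135-nm range, consistent with the authors' ~100-nm estimate.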
The system was programmed to step through a sequence of three shutter openings (one at a time for each beam), repeating the cycle without additional delay using a through-the-lens (TTL) triggering system (sequence frequency, 8 Hz). Emission images (with the 1.5× internal magnifying lens in the emission path) were acquired by a cooled EM-CCD camera (iXon, 512 × 512 pixels; Andor Technology). Camera control and serial image acquisition were managed by Solis (Andor Technology). Sequential NPY-pHl or tPA-pHl and DiD emission images (the latter excited sequentially by S- and P-polarized 561-nm laser beams and denoted S and P, respectively) were captured. Normalized P/S ratios and P+2S sums were calculated pixel by pixel for each image, and the transformations were aligned to the pHluorin images using custom software written in IDL.
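The per-pixel transforms described above can be sketched with NumPy. This is an illustrative sketch, not the authors' IDL software: background subtraction and registration to the pHluorin channel are omitted, and the small epsilon guard against dark pixels is an assumption.

```python
import numpy as np

def ptirf_transforms(P, S, eps=1e-6):
    """Pixel-by-pixel pTIRF metrics from P- and S-polarized DiD emission images.

    P/S rises where the membrane dipoles tilt out of the coverslip plane
    (curvature); P + 2S serves as an orientation-independent estimate of
    total dye at each pixel.
    """
    P = np.asarray(P, dtype=float)
    S = np.asarray(S, dtype=float)
    ratio = P / np.maximum(S, eps)  # guard against division by zero in dark pixels
    total = P + 2.0 * S
    return ratio, total

# toy 2x2 images: the pixel with extra P-polarized signal yields a higher ratio
P_img = [[10.0, 30.0], [10.0, 10.0]]
S_img = [[10.0, 10.0], [10.0, 10.0]]
ratio, total = ptirf_transforms(P_img, S_img)
```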
For pHl secretion experiments without DiD, images were acquired at a rate of 36 Hz on an iXon EM-CCD camera (Andor Technology). For pTIRF secretion experiments, shuttering was used to sequentially image pHl, P-polarized DiD, and S-polarized DiD at a rate of 8.5 Hz. All experiments were performed in a 34°C room to approximate physiological temperature.
Perfusion
All experiments were performed 3-5 d posttransfection in a room heated to 34 ± 1°C. Previous experiments from the laboratory were performed at 27°C (Weiss et al., 2014a,b); the higher temperature was used to better approximate the normal physiological temperature of the granules. Individual cells were perfused through a pipette (100-µm inner diameter) using positive pressure from a computer-controlled perfusion system, the DAD-6VM (ALA Scientific Instruments). Cells were maintained in a calcium physiological saline solution (CaPSS) plus glucose (145 mM NaCl, 5.6 mM KCl, 2.2 mM CaCl2, 0.5 mM MgCl2, 5.6 mM glucose, and 15 mM Hepes, pH 7.4). Other solutions used during perfusion were elevated-potassium PSS (KPSS; 95 mM NaCl, 56 mM KCl, 2.2 mM CaCl2, 0.5 mM MgCl2, 5.6 mM glucose, and 15 mM Hepes, pH 7.4) and low-pH MES buffer (145 mM NaCl, 5.6 mM KCl, 2.2 mM CaCl2, 0.5 mM MgCl2, 5.6 mM glucose, and 15 mM MES, pH 5.5). Generally, cells were treated according to the following schedule during perfusion: 3 s CaPSS, 3 s MES, 3 s CaPSS, 45 s KPSS, 5 s MES, and 10 s CaPSS. Before pTIRF imaging experiments, a saturated solution of DiD in ethanol was added to cells at a dilution of 1:500 and immediately rinsed three times with PSS.
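The perfusion schedule above is a fixed sequence of (solution, duration) steps; one way to represent it, for example to align solution changes with image timestamps, is sketched below. The solution names are from the text, but the code itself is illustrative and is not the actual DAD-6VM control software.

```python
# Perfusion schedule from the text, encoded as (solution, duration in s) steps.
SCHEDULE = [
    ("CaPSS", 3),
    ("MES pH 5.5", 3),
    ("CaPSS", 3),
    ("KPSS 56 mM K+", 45),
    ("MES pH 5.5", 5),
    ("CaPSS", 10),
]

def cumulative_times(schedule):
    """Return (solution, start_s, end_s) per step, e.g. for annotating image frames."""
    out, t = [], 0
    for name, dur in schedule:
        out.append((name, t, t + dur))
        t += dur
    return out

steps = cumulative_times(SCHEDULE)
total = steps[-1][2]  # 69 s overall
```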
Analysis of event duration
pHluorin-labeled protein discharge was measured over time for a small region of interest (∼0.7-µm diameter) centered on the event using Time Series Analyzer V2.0 plugin in ImageJ. Time-varying local backgrounds were determined by capturing the intensity of a neighboring region of interest without a fusion event. They were subtracted frame by frame from the intensities of the discharge events. The local background subtraction was necessary because of increases in background intensity caused by protein diffusion from nearby events.
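The local background correction described above amounts to subtracting, frame by frame, the mean intensity of a neighboring event-free region of interest from the mean intensity of the event region. A minimal NumPy sketch on a toy image stack (the ROI coordinates are illustrative; the original analysis used the Time Series Analyzer plugin in ImageJ):

```python
import numpy as np

def background_subtracted_trace(stack, event_roi, bg_roi):
    """Mean event-ROI intensity per frame, minus a neighboring event-free ROI.

    stack: (frames, H, W) array; each ROI is a (row_slice, col_slice) pair.
    Subtracting a time-varying local background corrects for fluorescence
    diffusing in from nearby fusion events.
    """
    stack = np.asarray(stack, dtype=float)
    event = stack[(slice(None),) + event_roi].mean(axis=(1, 2))
    background = stack[(slice(None),) + bg_roi].mean(axis=(1, 2))
    return event - background

# toy stack: a uniform drift of +1 per frame affects both ROIs and cancels out
frames = np.arange(3)[:, None, None] + np.zeros((3, 4, 4))
frames[:, 0, 0] += [0.0, 5.0, 2.0]  # event signal confined to pixel (0, 0)
trace = background_subtracted_trace(
    frames, (slice(0, 1), slice(0, 1)), (slice(2, 3), slice(2, 3))
)
```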
The duration of discharge of pHluorin-labeled proteins was determined with a custom program that largely eliminated subjectivity of the analysis and greatly facilitated the interpretation of the results. In this program, the user defines the start and end times in the fluorescence-versus-time curve for each event. The value of the fluorescence at the chosen start time t start , just before fluorescence begins to rise, is considered the baseline. The end time t end is chosen to be where the fluorescence after the event has returned to its lowest value. The program then determines the time of the maximum fluorescence t max within this time window. The intervals (t start , t max ) and (t max , t end ) are defined as the rise phase and fall phase, respectively. Each of those phases is first best-fitted with a fifth-degree polynomial (for smoothing), and then a weighted average slope is calculated for each interval, upward for the rising phase and downward for the falling phase. Straight lines with those slopes are then pinned to the positions of the maximum (upward or downward) slopes of the fluorescence data and extrapolated to the baseline. The time period between the baseline intercept of the rising phase straight line and the falling phase straight line is considered to be the duration of the event. For pTIRF experiments, two IDL programs were used to evaluate membrane curvature during secretion. Together, the programs remove background fluorescence from P-and S-polarized DiD images and create stacks of pHl, P/S, and P+2S images. Regions of interest are selected and values are plotted against time. The lengths of P/S changes were measured manually and semiquantitatively. The time that P/S took to return to the prefusion baseline was determined to be less than 1 s, 1-10 s, or longer than 10 s.
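The duration algorithm described above can be sketched in Python. The exact slope weighting of the authors' custom program is not specified, so an unweighted mean of the fitted polynomial's slope is used here as a stand-in assumption; the phase splitting at the fluorescence maximum, fifth-degree polynomial smoothing, pinning at the steepest point, and extrapolation to baseline follow the text.

```python
import numpy as np

def event_duration(t, f, t_start, t_end, deg=5):
    """Duration between the baseline intercepts of the rise- and fall-phase lines."""
    t, f = np.asarray(t, float), np.asarray(f, float)
    sel = (t >= t_start) & (t <= t_end)
    t, f = t[sel], f[sel]
    baseline = f[0]          # fluorescence at the chosen start time
    i_max = int(np.argmax(f))

    def baseline_intercept(tp, fp, rising):
        if len(tp) <= deg:   # too few points to fit the smoothing polynomial
            return tp[0] if rising else tp[-1]
        p = np.polyfit(tp, fp, deg)              # smooth the phase
        slope = np.polyval(np.polyder(p), tp)    # slope along the phase
        mean_slope = slope.mean()                # unweighted stand-in (see lead-in)
        i_pin = int(np.argmax(slope)) if rising else int(np.argmin(slope))
        # straight line through the steepest point, extrapolated to baseline
        return tp[i_pin] + (baseline - np.polyval(p, tp[i_pin])) / mean_slope

    t_on = baseline_intercept(t[: i_max + 1], f[: i_max + 1], rising=True)
    t_off = baseline_intercept(t[i_max:], f[i_max:], rising=False)
    return t_off - t_on

# illustrative check on a symmetric triangular pulse (1-s rise, 1-s fall)
t_ax = np.linspace(0.0, 2.0, 81)
pulse = np.where(t_ax <= 1.0, t_ax, 2.0 - t_ax)
dur = event_duration(t_ax, pulse, 0.0, 2.0)
```

For this idealized pulse the recovered duration is close to the full 2-s base width, illustrating why the metric is robust against the varying shapes of NPY-pHl, tPA-pHl, and PAI-pHl events.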
Online supplemental material
Supplemental Methods contains sequences of synthetic gBlocks, quantification of immunocytochemistry in transfected cells, and cartoons explaining the pTIRFM method. Fig. S1 shows that tPA has no effect on the mean PAI immunoreactivity per punctum. Fig. S2 shows coexpression of untagged tPA with PAI-pHl or NPY-pHl. Fig. S3 shows the extent of PAI overexpression: transfected vs. endogenous PAI. Fig. S4 illustrates the P/S response after secretory granule fusion in pTIRFM. Fig. S5 illustrates the P+2S response after secretory granule fusion in pTIRFM.
PAI colocalizes with endogenous tPA in secretory granules
We previously found that tPA is strongly expressed in ∼20% of chromaffin cells in culture, where it has a distinct punctate appearance indicative of secretory granules (Weiss et al., 2014b). Jiang et al. (2011) used immunogold labeling and electron microscopy to detect PAI in dense core granules in PC-12 cells and isolated bovine chromaffin granules. The percentage of chromaffin cells expressing PAI was not determined.
We thus asked whether PAI was present in those chromaffin cells that express tPA, and if so, whether PAI localized to tPA-containing secretory granules or to another granule population. Cultured bovine adrenal chromaffin cells were fixed, permeabilized, and incubated with antibodies to PAI and tPA, followed by secondary antibodies conjugated with fluorescent dyes, and then imaged by confocal microscopy ( Fig. 1; see Materials and methods and figure legends for details).
In contrast to immunoreactive tPA (Fig. 1, A and C), PAI-immunoreactive cells were widely distributed throughout the cultures, and PAI puncta were abundant throughout the cells (Fig. 1, B and D). Not only was punctate PAI found in the subset of cells strongly expressing tPA, but PAI puncta also colocalized with tPA puncta (examples indicated by yellow arrowheads in Fig. 1, C and D). Of 788 tPA puncta analyzed in 18 cells, less than 1% (7 puncta) lacked appreciable PAI immunoreactivity, indicating that the majority of granules with endogenous tPA also contain PAI. The presence of tPA had no apparent effect on the mean PAI immunoreactivity per punctum (compare the tPA-containing cell indicated by arrowheads in Fig. 1 B to the surrounding cells without tPA). When cells with tPA (tPA fluorescence, 34,578 ± 3,867 arbitrary fluorescence units [afu]) were compared with cells without tPA (background fluorescence, 154 ± 18 afu), there was no difference in the mean PAI per punctum (30,681 ± 1,788 vs. 29,425 ± 1,880 afu, respectively; Fig. S1). We conclude that in cultured bovine cells, PAI is ubiquitously expressed in chromaffin granules, including those that contain tPA, and that the sorting of PAI to chromaffin granules is unaltered by co-storage with tPA.

Figure 1. PAI colocalizes with endogenous tPA in secretory granules. (A, C, and E) Cultured bovine chromaffin cells were fixed with 4% paraformaldehyde, permeabilized with methanol, and incubated with a primary antibody to tPA (rabbit anti-mouse tPA; Molecular Innovations), followed by Alexa Fluor 488-labeled goat anti-rabbit Fab fragments (Jackson ImmunoResearch Laboratories). Fab fragments rather than bivalent antibodies were used to preclude capture of a second rabbit primary antibody in a subsequent labeling step. After rinsing, the cells were blocked with an excess of unlabeled goat anti-rabbit Fab fragments (Jackson ImmunoResearch Laboratories) to ensure that none of the first primary antibody (rabbit anti-tPA) would be accessible to a second anti-rabbit secondary antibody. (B, D, and F) Cells were next incubated with (B and D) or without (F) rabbit anti-human PAI (Abcam), followed by an Alexa Fluor 546-labeled anti-rabbit secondary antibody (B, D, and F; Molecular Probes). Cells were imaged by confocal microscopy. Images to be compared directly (e.g., A and E; B and F) were acquired at the same microscope settings, and the brightness and contrast were adjusted identically in making the figures. The absence of immunofluorescence in F (with no second primary antibody against PAI) indicates that the first rabbit primary antibody visualized in E was completely blocked before the addition of the second primary (seen in B and D). Colocalization of PAI (B) and tPA (A) is indicated by arrowheads and at an expanded scale in D and C, respectively. Bars, 2 µm.
Endogenous PAI colocalizes with dopamine-β-hydroxylase on the cell surface after stimulation

We and others have previously reported that components of the secretory granule membrane remain punctate on the cell surface for many seconds or minutes after fusion (Ceridono et al., 2011; Bittner et al., 2013). Chief among these is dopamine-β-hydroxylase (DBH), whose presence on the inner leaflet of the granule membrane is exposed to the extracellular space after fusion. We have also shown that certain cargo molecules (e.g., tPA-cer) may also be retained with DBH at the site of fusion for many seconds (Weiss et al., 2014a). Because PAI is in chromaffin granules, it may also be detectable on the extracellular surface at DBH-containing release sites. Intact chromaffin cells were stimulated for 10 s at 34°C, immediately chilled on ice to prevent endocytosis, and incubated with antibodies to PAI and DBH. Confocal images of extracellular punctate DBH and PAI are shown in Fig. 2 (A and B), respectively. In each of 10 cells examined, PAI colocalized well with punctate DBH on the plasma membrane, consistent with PAI being released from chromaffin granules and indicating that PAI can remain associated with release sites for many seconds.
Endogenous PAI colocalizes with endogenous tPA on the cell surface after stimulation

We found that PAI and DBH colocalized at sites of granule fusion. Because endogenous PAI is co-stored with endogenous tPA in tPA-expressing cells, we asked whether the proteins also colocalize on the cell surface after fusion. Indeed, there was a striking colocalization of tPA (Fig. 3 A) and PAI (Fig. 3 B) puncta on the plasma membrane after a 10-s depolarization with 56 mM K+. In 12 cells expressing endogenous tPA, there were 333 total extracellular puncta: 30 puncta had tPA alone, 40 had PAI alone, and 263 contained both proteins. Thus, 79% of PAI and tPA puncta colocalized on the surface of stimulated cells. When the fraction of puncta with colocalized tPA and PAI was calculated for each individual cell, it ranged from 67 to 90%, with a mean of 77.2 ± 2.2%. Calculated in another manner, PAI puncta colocalized with 90% of tPA puncta on the surface of stimulated cells. As expected, neither tPA (Fig. 3 C) nor PAI (Fig. 3 D) immunoreactivity was visible on the membrane of unstimulated cells (differential interference contrast image, Fig. 3 E).
Neutralization of secretory granules allows inhibition of tPA activity
It was surprising to find that tPA, a serine protease, is routinely co-packaged and stored in the same secretory granules as its inhibitor, PAI. What prevents PAI from irreversibly inhibiting tPA before its release? One possibility is that the acidic environment of the granule (pH 5.5) prevents the inhibition of tPA by PAI. Indeed, the inhibition is strongly reduced at acid pH in vitro (Komissarov et al., 2004). If that is the case, then neutralization of the granule interior might allow the inhibition to occur. We examined whether raising the pH in intracellular chromaffin granules by incubating cells in a physiological saline solution with the weak base NH4Cl (25 mM; Holz et al., 1983) leads to a decrease in tPA activity (Fig. 4). tPA activity was measured by separating cell lysates on a gel (zymogram) polymerized in the presence of two substrates, plasminogen and casein. Active tPA cleaves the plasminogen to plasmin, which then hydrolyzes casein, leaving a clear band in a Coomassie-stained gel. In Fig. 4 A, triplicate samples (shown in inverted grayscale) were scanned and quantified (Fig. 4 B). A 90-min incubation with 25 mM NH4Cl reduced the mean tPA activity by 47%.

Figure 2. Endogenous PAI colocalizes with DBH on the cell surface after stimulation. Cultured bovine chromaffin cells were stimulated for 10 s with 56 mM K+ at 34°C. The solution was replaced with buffer containing 5.6 mM K+, and the cells were immediately placed on ice. Cells were then incubated with antibodies to PAI (B) and to the lumenal domain of the granule membrane protein DBH (A) for 60 min on ice, and then processed and imaged by confocal microscopy. Because the cells were not permeabilized, only antigens present on the surface of the cells are visible. Arrowheads indicate instances of colocalization of secreted DBH and PAI. n = 10 cells. Bar, 2 µm.
These findings indicate that endogenous tPA is co-stored and co-secreted with its inhibitor PAI and is protected from inactivation by the low intragranular pH.
Next we explored with transfected, labeled proteins the effects of their co-storage on the dynamics of postfusion discharge, fusion pore expansion, and the implications of the postfusion rise in pH on inactivation of tPA.
Comparison of the postfusion discharge of NPY, tPA, and PAI labeled with pHluorin Secretory granule fusion and discharge of lumenal proteins were detected using proteins fused to the highly pH-sensitive GFP variant, ecliptic pHluorin (pHl; Miesenböck and Rothman, 1997). There was a rapid increase in fluorescence at individual fusion sites for all three proteins because of the rapid rise of pH upon fusion. However, the subsequent kinetics of the decay of fluorescence varied with the different proteins. NPY-pHl was usually discharged rapidly from the fused granule. At an acquisition rate of 36 Hz, many discharge events occurred over two to three frames (56-83 ms), and the majority of NPY events were completed in fewer than six frames (166 ms; Fig. 5 B). In contrast, the discharge of tPA-pHl was orders of magnitude slower, occurring over tens of seconds (Fig. 5 C). These results are consistent with previously reported observations with GFP or cerulean-labeled NPY and tPA (Taraska et al., 2003;Perrais et al., 2004;Tsuboi et al., 2004;Weiss et al., 2014a).
The discharge of PAI-pHl usually displayed a biphasic release pattern (Fig. 5 D), which was observed with neither NPY-pHl nor tPA-pHl. After the increase of fluorescence upon neutralization of the secretory granule lumen, there was a rapid loss of PAI-pHl fluorescence similar to that seen with NPY-pHl granules. However, unlike NPY-pHl or tPA-pHl, PAI-pHl fluorescence often did not decline smoothly. Instead, fluorescence rapidly decreased to approximately half of the maximal fluorescence and then was either stable or slowly declined. The fluorescence of the plateau phase completely (and reversibly) disappeared upon perfusion with pH 5.5 buffered solution, indicating that the protein was retained on the cell surface. As shown later (Fig. 7), the curvature changes associated with the fusion event coincide with the initial rapid phasic intensity increase and not with the plateau.

Figure 3. Endogenous PAI colocalizes with tPA on the cell surface after stimulation. Cultured bovine chromaffin cells were incubated for 10 s in buffer with (A and B) or without (C-E) 56 mM K+ at 34°C. The solution was replaced with buffer containing 5.6 mM K+, and the cells were immediately placed on ice. Cells were then incubated with antibodies to tPA (A and C) and PAI (B and D) for 60 min on ice, and then processed and imaged by confocal microscopy. (A and B) Arrowheads indicate instances of colocalization of secreted tPA and PAI. When the fraction of puncta with colocalized tPA and PAI was calculated for n = 12 cells, it ranged from 67 to 90%, with a mean of 77.2 ± 2.2%. n = 333 total puncta. (C-E) Unstimulated cells, which were processed for tPA and PAI and visualized as in A and B, have little or no secreted tPA or PAI on the plasma membrane. Images that are to be compared directly (e.g., A and C; B and D) were acquired at the same microscope settings and adjusted to the same brightness and contrast when making the figures. Bars, 2 µm.
Thus the plateau likely reflects the presence of PAI-pHl on the plasma membrane after the fusion pore expansion.
To quantify the duration of the postfusion discharge for the different proteins, software was developed (see Materials and Methods) that assigns a duration to individual events. The program largely eliminated subjectivity from the analysis and greatly facilitated the interpretation of the results. Examples of the analysis are shown in Fig. 5 (red and blue). In the ∼60% of the PAI-pHl events with a rapid phasic increase followed by a plateau of fluorescence, the event duration was calculated from the fluorescence changes preceding the plateau (Fig. 5 D).
The analysis confirmed that the discharge of NPY-pHl (control) was much faster than that of tPA-pHl (control), with median durations of ∼0.1 and 10 s, respectively (Fig. 6, C and E). The discharge of PAI-pHl (control) was intermediate between the other two proteins, with a median duration of ∼0.5 s (Fig. 6 A and see Fig. 9).
Expression of tPA in secretory granules slows the postfusion discharge of colocalized PAI or NPY Immunocytochemistry (Figs. 1, 2, and 3) indicated that endogenous PAI and tPA can be packaged within the same secretory granule. tPA greatly slows the expansion of the fusion pore and, in addition, may covalently bind PAI after fusion when the granule lumen is neutralized. We therefore predicted that PAI discharge after fusion would be retarded from granules costoring tPA. Chromaffin cells were cotransfected with plasmids encoding PAI-pHl and unlabeled tPA. Immunocytochemistry revealed that ∼80% of the PAI-pHl-labeled granules coexpressed tPA (Fig. S2). There was a 1.87-fold increase in immunoreactive PAI in cells transfected with tPA-pHl and untagged PAI compared with nontransfected cells (Fig. S3). PAI-pHl cotransfected with a plasmid encoding tPA was discharged with a fivefold greater median duration than PAI-pHl cotransfected with a control plasmid (pcDNA3; Fig. 6, A and B). Coexpression of unlabeled PAI did not alter the discharge of tPA-pHl (Fig. 6, E and F).
To determine whether the ability of tPA to slow the discharge was specific for PAI-pHl, the effect of transfected tPA on the discharge of NPY-pHl was investigated. Immunocytochemistry revealed that ∼80% of the NPY-pHl-labeled granules coexpressed exogenous tPA (Fig. S2). Cotransfected tPA caused a twofold increase in the median duration of NPY-pHl events (Fig. 6, C and D). Although the effect of tPA on the discharge of NPY-pHl was less than that on the discharge of PAI-pHl, the results indicate that the ability of tPA to slow the discharge of another lumenal protein is not specific for PAI.

tPA causes a prolonged fusion pore neck in the presence of PAI and NPY

Fusion of the secretory granule membrane with the plasma membrane is accompanied by a sudden increase in the local curvature of the plasma membrane at the fusion site (Anantharam et al., 2010b, 2011). Curvature at the fusion junction can be detected by a combination of polarization and TIRF of an oriented membrane fluorophore (a carbocyanine dye, e.g., DiD), which incorporates into the plasma membrane bilayer with its preferred polarization of light absorption and emission parallel to the local plane of the membrane (Axelrod, 1979; Fig. S4). TIRF microscopy relies on the two possible orthogonal electric field polarizations of an evanescent field: one predominantly along the z axis (optical axis perpendicular to the coverslip, P-polarized), and the other in the plane of the coverslip (S-polarized). P-polarized light excites only membrane DiD with an absorption dipole component that is perpendicular to the coverslip, whereas S-polarized light excites only DiD that has an absorption dipole component parallel to the coverslip. The key curvature measurement is an increase in the ratio of the emission with P-polarized excitation to the emission with S-polarized excitation (termed P/S). We earlier demonstrated that the curvature changes associated with the fusion of tPA-containing granules had a many-fold longer duration than those of NPY-containing granules (Weiss et al., 2014a). The slow discharge of tPA was associated with long-duration curvature changes. We wanted to determine whether the prolonged fusion pore associated with fusion of a tPA-containing granule also occurred when tPA was coexpressed with PAI-pHl or NPY-pHl.

Figure 4. Neutralization of secretory granules allows inhibition of tPA activity. (A) Bovine chromaffin cells were incubated for 90 min in a physiological saline solution with or without 25 mM NH4Cl at 34°C to neutralize secretory granule pH. Cell lysates were resolved on a 10% SDS polyacrylamide gel containing casein (1 mg/ml) and plasminogen (10 µg/ml). SDS was removed by four washes in 2.5% Triton X-100 to allow renaturation of tPA. Gels (zymograms) were incubated in 100 mM Tris, pH 8.1, for 4 h at 37°C and then stained with Coomassie blue to visualize casein hydrolysis (inverted grayscale, in triplicate). (B) Gels were scanned, and band intensities were quantified in ImageJ. Mean ± SEM is shown. Student's t test resulted in a p-value of 0.0027.
When PAI-pHl was transfected into cells without tPA, increases in membrane curvature (as determined by an increase in P/S) frequently matched the duration of the initial spike in PAI-pHl fluorescence (Fig. 7, A and B). Curvature changes and thus the fusion pore were not associated with the fluorescence plateau of the fusion event. Because the plateau was quenched by low pH, the plateau likely reflects deposition and local binding of PAI-pHl at the surface of the cell after the granule membrane flattens into the plasma membrane.
Coexpression of exogenous tPA with PAI-pHl increased the duration of the P/S elevation compared with expression of PAI-pHl alone (Fig. 7, C and D). Coexpression increased the fraction of events that had P/S durations greater than 10 s from 25% to 76% (Fig. 7 E). tPA caused a similar increase in duration of curvature associated with the fusion pore when coexpressed with NPY-pHl (Fig. 8).

Figure 5. NPY, tPA, and PAI have different secretion characteristics. Bovine chromaffin cells were transfected to express cargo proteins fused to pHl. Secretion was stimulated with 56 mM potassium buffer and observed by TIRF microscopy at a rate of 36 Hz. pHl fluorescence intensity was analyzed with a custom duration-finding program that is robust against variations in the shape of the data curve. (A) Schematic of program features. The solid thin black curve is the noisy fluorescence versus time of hypothetical data. A start time t_start is chosen, at which the fluorescence is defined to be baseline. Analysis is done separately for the rising phase (red) and the falling phase (blue), defined as before or after the fluorescence maximum time t_max, respectively. First, the fluorescence versus time in each phase is smoothed by fitting to a fifth-degree polynomial (thick red or blue solid lines). Next, a weighted average slope is calculated for each phase in the respective time windows (t_start, t_max) and (t_max, t_end). Straight lines with those slopes (shown as dotted lines) are pinned to the maximum slope points (denoted by circles) and then extrapolated to the baseline to determine the event duration. (B) NPY-pHl is secreted rapidly. Entire events frequently take less than five frames at 36 Hz (inset; each point is one frame). Although the analysis was performed on a region of interest encompassing only the largest (first) fluorescence change, fluorescence changes from nearby fusion events are also evident. (C) tPA-pHl is secreted slowly, frequently lasting many seconds. (D) PAI-pHl is secreted rapidly. Often, ∼50% of PAI-pHl fluorescence is immediately lost, over just a few frames (inset). A fraction of PAI-pHl remains on the cell surface (plateau) and is sensitive to a pH 5.5 solution applied extracellularly.
Information about the geometry of the fusion pore can be extracted from the p-TIRF measurements (Fig. S5). The linear combination of the emissions P+2S reports approximate total DiD emission as observed by a 1.49-NA objective, which in theory is proportional to the amount of DiD at any x-y-z location multiplied by the evanescent field intensity (Anantharam et al., 2010b). Computer simulations (Anantharam et al., 2010b) indicate that P+2S will increase if the geometry results in more DiD-labeled membrane close to the glass interface, as when a fused granule is attached to the plasma membrane by a short narrow neck. P+2S will decrease if DiD diffuses into a postfusion membrane indentation (placing DiD farther from the substrate and thereby in a dimmer evanescent field intensity). P+2S is not as robust a measurement as P/S because an increase, decrease, or no detectable change in P+2S is possible depending on various countervailing tendencies arising from the geometrical details of the membrane deformation. In those fusion events of granules containing tPA alone or tPA with either PAI-pHl or NPY-pHl with measurable change in P+2S, ∼90% had an increase in P+2S. Thus, it is likely that tPA with or without colocalization with other transfected proteins in secretory granules preferentially stabilizes a geometry that occurs soon after fusion, with the granule membrane connected to the plasma membrane through a short narrow neck.
Mutation of the active-site serine in tPA does not alter the retarded release of coexpressed PAI-pHl

We investigated whether the ability of tPA to slow the release of co-stored PAI is caused not only by delayed fusion pore expansion, but also by the covalent attachment of PAI to tPA at the active-site serine (serine 513). This reaction is likely to occur upon the increase of lumenal pH after fusion but before complete discharge of PAI. The possibility was investigated by cotransfecting PAI-pHl with either wild-type tPA or tPA (S513A; Fig. 9). The median discharge duration was increased from 0.4 s in the absence of transfected tPA to 14 s by both wild-type and mutant tPA. Both wild-type and mutant tPA slowed PAI-pHl discharge similarly. Thus, the covalent interaction of PAI with tPA does not contribute to the effect of tPA to slow PAI discharge after fusion.

Figure 6. tPA slows the release of co-packaged PAI-pHl and NPY-pHl. Bovine chromaffin cells were cotransfected to express cargo proteins tagged with pHl in tandem with tPA, PAI, or pcDNA3 vector control. Secretion was stimulated with 56 mM potassium buffer and observed by TIRF microscopy at a rate of 36 Hz. pHl fluorescence intensity was analyzed with a custom program as described in Fig. 5. (A and B) tPA slows PAI-pHl secretion. (C and D) tPA slows NPY-pHl secretion. (E and F) PAI does not slow the secretion of tPA-pHl. Each dot represents one fusion event, and medians are indicated by lines. Cumulative histograms plot data from upper panels. A Kolmogorov-Smirnov test was performed for A and C. One-way ANOVA was performed in E with a post hoc Kruskal-Wallis test.
Discussion
We previously reported that a lumenal cargo protein within a secretory granule strongly inhibits the expansion of its own fusion pore, thereby slowing its own postfusion discharge. tPA is endogenously expressed in a subpopulation of chromaffin cells and had been implicated in an autocrine/paracrine negative feedback pathway that is initiated by proteolytic activation of surface plasminogen to plasmin. By regulating its own discharge, tPA likely shapes the kinetics of the extracellular proteolytic pathway. Here we describe another consequence of the regulation of the fusion pore. A fused granule with a stable fusion pore causes the high concentrations of proteins retained within the granule lumen to be suddenly exposed to neutral pH. In this context, we considered the finding that tPA and its primary protein inhibitor, PAI, are both expressed in chromaffin cells and both are secreted upon stimulation (Parmer et al., 1997; Jiang et al., 2011).
Endogenous PAI and tPA are co-packaged in the same secretory granules in chromaffin cells and can be detected on the cell surface

PAI is a suicide substrate that irreversibly acylates the active site in tPA at neutral pH (Lawrence et al., 1995). Previous studies demonstrated by sucrose density purification that both tPA and PAI are localized in chromaffin granules (Parmer et al., 1997; Jiang et al., 2011). In the present study, we used immunocytochemistry to examine the localization of the endogenous proteins in chromaffin cells. PAI was expressed in the majority of chromaffin cells in the cultures as small puncta, strongly suggesting that they were in chromaffin granules. Indeed, upon brief stimulation (10 s) with elevated K+, PAI that had not yet diffused into the medium was found on the cell surface in small puncta also containing membrane-bound DBH, a lumenal marker of chromaffin granules. Whereas PAI was expressed in most of the chromaffin cells, tPA is expressed in only 20% (Weiss et al., 2014b). Remarkably, tPA expression in chromaffin granules was invariably associated with coexpression of its inhibitor PAI (Fig. 1). In a previous study, we demonstrated that endogenous tPA, like PAI, appears on the cell surface as puncta colocalized with DBH after a 10-s stimulation (Weiss et al., 2014a). Not surprisingly, when cells were stimulated with elevated K+, endogenous tPA was colocalized with endogenous PAI on the cell surface in puncta (Fig. 3), indicative of co-discharge of the co-packaged proteins.

Figure 7. Bovine chromaffin cells were cotransfected with either PAI-pHl and pcDNA3 or PAI-pHl and tPA. pTIRF microscopy was performed as described in Materials and Methods. (A and C) Changes in pHluorin fluorescence were recorded over time. (B and D) Concurrently, DiD fluorescence as excited by P- and S-polarized light was recorded. The ratio P/S, plotted against time, corresponds to localized increases in membrane curvature. Secretion start time is indicated by the dotted red line. (E) Increases in P/S are reported semiquantitatively. The length of time P/S was elevated was measured for n = 40 PAI-pHl + pcDNA3 or n = 17 PAI-pHl + tPA events and binned as shown.
We reckoned that the low pH of chromaffin granules (pH 5.3-5.5; Holz et al., 1983) protects tPA from inactivation. Indeed, when the lumenal pH of intracellular chromaffin granules was raised by incubation of cells with NH4Cl, subsequent zymography of cell homogenates demonstrated that tPA activity was inhibited 50% (Fig. 4). Because the covalent interaction of tPA and PAI is robust at neutral pH, these experiments indicate that endogenous tPA is protected from covalent inhibition by PAI because of the low lumenal pH, but can be rapidly inactivated upon the rise of intralumenal pH with fusion. Thus, the punctate tPA that colocalizes with PAI on the cell surface immediately upon fusion is likely to be at least partially inhibited.
tPA creates a nanoscale chamber upon fusion that permits covalent interaction with PAI

The colocalization of endogenous PAI with tPA prompted us to investigate the effect of tPA on the discharge of co-stored fluorescently labeled PAI. PAI-pHl discharge in the absence of transfected tPA often had an unusual time course. There was rapid partial release (within 1-2 s) followed by stable fluorescence for tens of seconds on the extracellular surface (Fig. 5 D). Because pTIRF microscopy detected curvature only during the initial rapid release, it is likely that the retained fluorescence reflects PAI-pHl binding to the cell surface after expansion of the fusion pore.
Co-storage with transfected tPA slowed the initial discharge of PAI-pHl at least fivefold (Fig. 6). Because the discharge of NPY-pHl was also slowed by co-storage with tPA, at least part of the retention of PAI-pHl by tPA was likely caused by the profound slowing of fusion pore expansion induced by tPA (detected by pTIRF; Fig. 7). Indeed, the ability of tPA to slow fusion pore expansion can even retard the discharge of catecholamine detected by amperometry (Weiss et al., 2014a).

Figure 8. tPA slows fusion pore expansion in the presence of NPY-pHl. Bovine chromaffin cells were cotransfected with either NPY-pHl and pcDNA3 or NPY-pHl and tPA. pTIRF microscopy was performed as described in Materials and Methods. (A and C) Changes in pHl fluorescence were recorded over time. (B and D) Concurrently, DiD fluorescence as excited by P- and S-polarized light was recorded. The ratio P/S, plotted against time, corresponds to localized increases in membrane curvature. Secretion start time is indicated by the dotted red line. (E) Increases in P/S are reported semiquantitatively. The length of time P/S was elevated was measured for n = 22 NPY-pHl + pcDNA3 or n = 33 NPY-pHl + tPA events and binned as shown.
Binding of PAI-pHl to the slowly discharged tPA may also contribute to the retardation of PAI-pHl discharge. We showed with tPA having its active site serine mutated to alanine that the slowing of PAI discharge did not require covalent interaction of PAI and tPA (Fig. 9). However, because PAI binds to tPA even without acylation (Olson et al., 2001), noncovalent interaction with tPA could contribute to the slow discharge kinetics.
Function of a nanoscale reaction chamber
There are at least two possible consequences of the formation of a nanoscale reaction chamber. First, the amount of enzymatically active tPA that is discharged is likely to be lessened because of inhibition by co-stored PAI immediately upon the neutralization of the granule interior upon fusion. Zymography indicated that 50% of the tPA activity was retained after the pH within intracellular secretory granules was raised by incubation with NH4+, suggesting that there is a molar excess of endogenous tPA over PAI in secretory granules. This finding is consistent with enzymatically active tPA being secreted from chromaffin cells upon stimulation (Parmer et al., 1997).
Is there enough time between the rise in lumenal pH upon fusion and discharge of PAI to permit the covalent inactivation of tPA by PAI? The reaction should be complete within a millisecond of pH neutralization, based on the rate constant of the interaction of the two proteins in vitro (Lawrence et al., 1990) and assuming that the endogenous proteins are as little as 1% by weight (∼30-70 µM concentrations) of the total lumenal granule protein (∼250 mg/ml; R.W. Holz, personal observations). If these concentrations of free proteins are actually present in the granule, then little unreacted PAI would be discharged, because the discharge time for total PAI-pHl is greater than 1 s for 90% of the fusions in the presence of tPA (Fig. 6, A and B). However, there is great uncertainty in this estimate, and the reaction rates could be orders of magnitude slower. The free concentrations of endogenous tPA and PAI are unknown. Indeed, tPA can be quite insoluble at neutral or acid pH (∼0.1 mg/ml; Nguyen and Ward, 1993). In addition, because the kinetics were determined in dilute and ideal reaction conditions, the reaction rate may be much less in the unusual environment of the recently fused granule. Therefore, we cannot rule out the possibility that some unreacted PAI and tPA escapes through the slowly expanding fusion pore, and thus reduces the amount of the tPA/PAI complex. Nevertheless, there seems to be more than enough time to allow the inhibition of tPA and the creation of a new secreted species.
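The timing argument above can be sketched numerically. The rate constant used here is only an assumed order of magnitude for the tPA/PAI association (not a value taken from this paper), and PAI is treated as being in pseudo-first-order excess:

```python
import math

# Assumed association rate constant for tPA/PAI; the 1e7 M^-1 s^-1
# order of magnitude is an illustrative assumption, not a measured value.
K_ON = 1e7  # M^-1 s^-1

def pseudo_first_order_half_life(inhibitor_conc_molar, k_on=K_ON):
    """Half-life of free tPA when PAI is in large excess:
    t_1/2 = ln(2) / (k_on * [PAI])."""
    return math.log(2) / (k_on * inhibitor_conc_molar)

# The ~1% by weight estimate in the text corresponds to ~30-70 uM lumenal PAI.
for conc_uM in (30, 70):
    t_half = pseudo_first_order_half_life(conc_uM * 1e-6)
    print(f"[PAI] = {conc_uM} uM -> t1/2 ~ {t_half * 1e3:.1f} ms")
```

Under these assumptions the half-life of free tPA is on the order of a few milliseconds, consistent with the estimate that the reaction is essentially complete well before the >1 s discharge time of PAI-pHl.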
A second consequence of the formation of a nanoscale reaction chamber after fusion is the formation of the covalent tPA/PAI complex. The complex may have physiological function. It is a high-affinity ligand for the LDL receptor-related protein (LRP-1); binding to LRP-1 results in its endocytosis (Stefansson et al., 1998; Lillis et al., 2005). LRP-1 also functions as an intracellular scaffold for protein kinase signaling pathways (Lillis et al., 2005; Mantuano et al., 2013). The signaling effects of binding tPA/PAI are unknown. Because LRP-1 mRNA is expressed in human adrenal medulla (Uhlén et al., 2015), the locally released tPA/PAI complex may itself have autocrine/paracrine effects. Secreted tPA/PAI complexes may also have systemic effects mediated through interaction with LRP-1 at distal sites (Cale and Lawrence, 2007).
In summary, our results, as well as those of Chiang et al. (2014), highlight the fact that omega-figure-like structures can have durations of many seconds after fusion. The present study proposes that these structures can have important physiological consequences. The slow postfusion discharge of tPA has been observed by numerous investigators in different cells (Taraska et al., 2003; Perrais et al., 2004; Tsuboi et al., 2004) including vascular endothelial cells (Suzuki et al., 2009). The present study places these observations in a new context. We reveal the surprising discovery that PAI, the physiological inhibitor of tPA, is coexpressed in secretory granules and co-discharged with tPA. We demonstrate with pTIRFM that tPA slows its own fusion pore expansion in the presence (and absence) of PAI, thereby slowing discharge of lumenal contents and creating a neutral-pH nanoscale reaction chamber on the cell surface. This chamber permits the covalent interaction of inhibitor and enzyme, and thereby creates a new secreted product with potential intra- and intercellular signaling function.

Figure 9. A protease-dead mutant of tPA still slows PAI-pHl secretion. Bovine chromaffin cells were cotransfected to express PAI-pHl in tandem with tPA, S513A tPA, or empty vector control (pcDNA3). Secretion was stimulated with 56 mM potassium buffer and observed by TIRF microscopy at a rate of 36 Hz. pHl fluorescence intensity was analyzed with a custom program as described in Fig. 5.
Acknowledgments

We are grateful to Mark Warnock for providing expert advice about zymography and antibody standards for tPA and PAI. We thank Dr. Prabhodh Abbineni for helpful discussions about this work and for experimental assistance. This work was supported by National Institutes of Health grants R01-170553 to R.W. Holz and D. Axelrod, R01 HL55374 to D.A. Lawrence, and T32-HL-007853 to K.P. Bohannon.
The authors declare no competing financial interests. Author contributions: K.P. Bohannon conceived, performed, and analyzed experiments and was a major contributor to the writing of the paper. M.A. Bittner conceived, performed, and analyzed experiments and was a major contributor to the writing of the paper. D.A. Lawrence helped design experiments. D. Axelrod developed mathematical methods for analyzing secretion events and contributed to the writing of the paper. R.W. Holz helped conceive of the experiments and contributed significantly to the writing of the paper.
Sharona E. Gordon served as editor.
The possibility of nanostructure character in approaching Kondo effect
Based on the instability of the magnetic structure, a new class of heavy fermions is constructed with a stable local magnetic ion, Gd. The lattice constants, DC magnetic susceptibility and electrical resistivity measurements in the magnetically unstable intermetallic compounds show: (1) the instability of the crystal structure, as well as the high transition temperature Tc, strongly depends on the conduction-electron concentration; the reduced size effect and the reduction in correlation length are expected to cause this behaviour, which may reflect the nanostructure character as well as the competition between inter- and intra-cluster interactions; (2) the coexistence of Kondo lattice behaviour and magnetic ordering (a 're-entrant antiferromagnet') in the temperature range 30 K < T_K < 90 K, with T_N = T_max = 30 K; and finally (3) a metal-insulator-like behaviour with a complete quench of the antiferromagnetic ordering, named a 'superparamagnet', at a certain conduction-electron concentration.
Introduction
Even though many of the ground-state properties of elemental Gd, a stable s-state ion with the electronic configuration 4f^7 (5s^2 5p^6) 5d^1 6s^2, and the characters listed below are well defined experimentally [1,2,3], numerous experimental and theoretical [2] features of its intermetallic compounds (IMC) Gd2X (X = Al, Au) are still contradictory [4,5,6], such as the unusual double magnetic transition, the metamagnetic behavior [7] and their high magnetic moment µeff = 9.6 µB [8]. The Gd element crystallizes in the hcp structure with an inter-atomic distance of Rij = 3.6 Å, while Gd2X is orthorhombic with Rij = 3.16-3.6 Å. On the other hand, we are aware that the magnetic character of Gd is ferromagnetic with Tc ≈ θp = 293 K (where θp is the paramagnetic Curie-Weiss temperature), with an effective magnetic moment µeff of 7.6 µB and a zero-temperature moment µsat(T = 0) = 7.63 µB. These values indicate that the induced polarization of the conduction band is at least 0.63 µB, which arises from an inter-band exchange coupling (of RKKY type) between the itinerant 5d/6s conduction-band electrons and the localized 4f electrons [9]. Therefore the nature and character of the d conduction electrons (c.e.) should change and fluctuate in the s-d range, where a shouldering of the localized f electrons on the d band has also been reported [10]. In order to consider the character of the c.e., the exchange correlation energy as well as the exchange fluctuation is investigated at the double phase transition, which can be the main cause of (a) the instability of the crystal and magnetic characters and (b) the competition between the RKKY magnetic exchange and the Kondo phenomenon, and is therefore one of the most interesting points in heavy-fermion formation. In this case the energy of the ground state depends on the relative magnitude of the RKKY (inter-ionic coupling) or the Kondo (on-site coupling) energy. Also, in a nano-magnetic system (i.e.
where only the Kondo effect dominates), multiple levels of Kondo energies can be noticed [11], and the observation of two stages of Kondo energies has been reported. The first screening stage, with energy scale T_K1, is an under-screened Kondo effect which reduces the net spin from 1 to 1/2. In the other (T_K2), the Kondo effect causes the quench of the different spins and makes their value S = 0. Consequently, following our previous works on the observable Kondo lattice behavior of Gd [12] and on the shape and c.e.-concentration dependency of the magnetism due to the strong inter-planar a-b exchange, which is a cause of distortion [13], the possibility of nanoparticle formation should be considered.
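The free-ion moment values quoted above follow from the Hund's-rule picture of Gd3+ (L = 0, S = 7/2, g ≈ 2); a minimal numerical check, whose only inputs are the g value and the pure-spin assumption:

```python
import math

g = 2.0  # Lande g-factor for a pure spin (L = 0) ion
S = 3.5  # spin of Gd3+ (4f^7)

mu_eff = g * math.sqrt(S * (S + 1))  # Curie-law effective moment, in mu_B
mu_sat = g * S                       # zero-temperature ordered moment, in mu_B

print(f"mu_eff = {mu_eff:.2f} mu_B")  # ~7.94
print(f"mu_sat = {mu_sat:.2f} mu_B")  # 7.00

# Measured saturation moment quoted in the text:
mu_measured = 7.63
print(f"conduction-band polarization ~ {mu_measured - mu_sat:.2f} mu_B")  # ~0.63
```

The difference between the measured 7.63 µB and the free-ion 7 µB is exactly the ≈0.63 µB of induced conduction-band polarization invoked in the text.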
Sample preparation
The initial elements were Gd of 4N purity and Al and Au of 5N. The master sample of Gd 2 Au x Al 1-x (x=0.4, 0.3) was prepared by melting these elements together in a conventional induction furnace in pure dry argon atmosphere. Annealing was done at 600 °C during 95 hours. And the X-ray patterns proved that our sample was single phase within the accuracy of the method and had crystallized in orthorhombic Pnma space group of Co 2 Si type structure [13]. By a vibrating sample magnetometer, the DC susceptibility measurements were carried out on the sample in the temperature range of 4.2-300 K while the applied fields were between 100 G to 10000 G.
Results and discussions
A key issue of fundamental research is to understand the effects of structural distortions, as well as defects, on the extrinsic magnetic properties that are present in every real nanostructure. The following manifested behaviors forced us to suggest the coexistence of Kondo and nanostructured particles, where it can still be questioned which one is the cause or source of the other, and whether the exchange interaction or the distortion is the main cause of these characters, as both are related to the c.e. concentration. The exchange interaction J_ij, which is a function of the topology of the magnetic ions |R_i - R_j|, can be calculated as a function of the lattice parameters (Å) [12], where z is the ratio of the number of free electrons in the unit cell. The calculated sign and strength of the exchange interaction J_ij show a distortion in the topological structure sites of the magnetic ions, due to the change in the inter-atomic spacing of the closest nearest neighbors (those with the least distance from each other, even less than the nearest neighbors) in the range 3.16 Å ≤ R_c ≤ 3.6 Å (table 1), named the intra-cluster spacing, which is the main cause of the expected behavior. The reduction of the correlation length of Gd (from R_c = 3.6 Å to 3.16 Å) for the 8 closest nearest neighbors with strong exchange interaction could be the main cause of the formation of intra-cluster interactions (grain size), which could also be the basis of isolated nano-size formation. The lattice parameters were calculated from the observed X-ray pattern. The results show a considerable enlargement of the unit-cell volume (figure 1), which is mainly affected by the increase of the c.e. concentration, in spite of the fact that the ionic radius R(Au1+) > R(Al3+). It is evident that the lattice constants drastically depend on the c.e. concentration and not on the size effect, where the parameter c is rigorously expanded while a and b are contracted and expanded, respectively, in the direction of (i) the strong distortion of the a-b plane, due to the strong intra-plane exchange interaction of the magnetic ions, J_sh = J in a-b, and (ii) the enlargement of c, caused by the decrease of exchange between the planes.
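The distance dependence underlying the J_ij calculation can be illustrated with the standard RKKY range function F(x) = (x cos x - sin x)/x^4 with x = 2 k_F R; this is the generic textbook form, not necessarily the exact expression used in [12], and the Fermi wave vector below is purely illustrative:

```python
import math

def rkky_range_function(x):
    """Standard RKKY range function F(x) = (x*cos(x) - sin(x)) / x**4."""
    return (x * math.cos(x) - math.sin(x)) / x**4

def j_rkky(R, k_fermi, j0=1.0):
    """Oscillating inter-ionic coupling J(R) ~ j0 * F(2*k_F*R).
    The prefactor j0 lumps together the s-f exchange and the density
    of states; it is set to 1 here for illustration."""
    return j0 * rkky_range_function(2.0 * k_fermi * R)

# Sign and magnitude of J(R) over the 3.16-3.6 A neighbour range depend
# on k_F, i.e. on the conduction-electron concentration (k_F assumed):
k_F = 1.4  # 1/Angstrom, illustrative
for R in (3.16, 3.4, 3.6):
    print(f"R = {R:.2f} A -> J ~ {j_rkky(R, k_F):+.2e}")
```

Because F(x) oscillates in sign, small changes in the inter-atomic spacing or in k_F can flip J between ferro- and antiferromagnetic coupling, which is the mechanism the text invokes for the concentration-dependent magnetic instability.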
Consequently, from every point of view, the influence of the reduced correlation length on the intra-cluster region, due to the strong intra-atomic exchange or to the strong exchange interaction in the a-b planes (which causes the distortion and the linear increase along the c direction, as well as the decrease of the inter-planar exchange), can be the cause of nanocrystal formation within the clusters. In order to investigate the nano-sized grains, the Debye-Scherrer relation (d = 0.94λ/(D cos θ), with d the crystallite size and D the line width) is applied to the widths of the most intense lines in the X-ray patterns (figure 2), where the size of the nanoparticles is found to be about 20 nm. Needless to state, this is an intrinsic property and can be affected only by the annealing process. Based on a reference sample (10 µm), the size of nanocrystalline Gd is reported to be 13 nm [14]. Therefore the measured critical point x = 0.4 is one isolated point positioned between the two mentioned categories. It is interesting to note that we have observed the Kondo behavior only for this compound (x = 0.4), while our calculations of the RKKY inter-ionic interactions show that exactly at this point the RKKY coupling is at its minimum value [12]. In figure 3 the variations of the susceptibility vs. temperature for x = 0.3-0.4 are illustrated in different magnetic fields. The measurements were made on powdered samples which were warmed up to 300 K in the applied field, after being cooled down to liquid-helium temperature in the absence of a field. Cooling in the presence of a field, and cooling below T = 100 K in zero field, were applied to observe the thermal and magnetic history. It is evident that the Curie temperature appears as a broad shoulder and the magnetic structure can be destabilized, where the phase transition is found to be smeared and a mixture of transitions exists in low magnetic fields.
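The crystallite-size estimate from the line widths can be reproduced with the Scherrer relation written in its usual notation, size = Kλ/(β cos θ) with K ≈ 0.94 and β the full width at half maximum in radians; the peak position and width below are illustrative values chosen to show the ~20 nm scale, not the measured ones:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k_shape=0.94):
    """Crystallite size D = K * lambda / (beta * cos(theta)),
    with beta the FWHM of the diffraction line in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k_shape * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation and an illustrative line width/position that
# reproduce the ~20 nm scale quoted in the text:
D = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.43, two_theta_deg=32.0)
print(f"crystallite size ~ {D:.0f} nm")
```

Note that the size varies inversely with the line width: a line twice as broad would imply crystallites half as large, which is why only the most intense, well-resolved lines are used.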
This critical point of unstable magnetic character (x = 0.4) becomes extremely sensitive to the physical parameters of the sample (the annealing process and the applied magnetic field). If this value of x (= 0.3, 0.4) is close to the critical value, near the double FM/AFM percolation threshold arising from the competition of inter- and intra-cluster interactions, it should be the point at which no frustration (or competition) occurs, and the system can then stabilize so that Σ ε_ij J_ij = 0. This takes place at a certain c.e. concentration and applied magnetic field at which the ferromagnetic region is suppressed and the sample behaves as a paramagnet above T_o = T_N = 30 K. This behavior above 50 K indicates that the local magnetic moment has a magnetic quantum number equal to zero (m = 0). As the temperature approaches T_K, we observe a progressive increase of χ(T) with decreasing temperature. The falling behavior and the suppressed ferromagnetic Curie region observed in the χ(T) curve indicate that the correlation in the RKKY energy is decreasing, corresponding to the defined reductions of the cluster size. As expected, the weak presence of the RKKY interaction follows in such a way that (i) the Kondo effect shows up more effectively [15] and (ii) the more isolated cluster region is due to the reduction in the inter-atomic spacing from 3.6 Å for pure Gd to 3.16 Å for x = 0.4. This anomaly can be attributed to the appearance of a virtual bound state which localizes the conduction electrons, and is therefore the beginning of Kondo-cloud formation. When the intra-cluster exchange overcomes the inter-cluster one, the system behaves completely as an isolated clustering region; in this case the resistivity should be high and the Kondo effect should change to complete heavy-fermion behavior (figure 4).
In fact, the interactions between the Kondo clouds (made of localized electrons) and the itinerant electrons form a narrow resonance at the Fermi level, named the 'Abrikosov-Suhl' resonance [16]. It can be concluded that the structural distortion is due to the strong intra-cluster exchange in the range of correlation lengths 3.16 Å < R_c < 3.4 Å, with 8 closer nearest neighbors, which can be the cause of isolated-nanoparticle formation at a critical c.e. concentration at which (1) the X-ray diffraction follows the Debye-Scherrer relation with a nano size of 20 nm; (2) the magnetic phenomena are sensitive to the physical parameters, where the character of the system changes to superparamagnetic and domain-wall-pinning-like behavior can be observed below T_o = 30 K; and (3) heavy-fermion-like behavior is the character of the inter-cluster exchanges.
On truly nonlinear oscillator equations of Ermakov-Pinney type
In this paper we present a general class of differential equations of Ermakov-Pinney type which may serve as truly nonlinear oscillators. We show the existence of periodic solutions by exact integration after the phase plane analysis. The related quadratic Lienard type equations are examined to show for the first time that the Jacobi elliptic functions may be solutions of second-order autonomous non-polynomial differential equations.
Introduction
In the research field of periodic solutions to Lienard nonlinear differential equations of the form

(1.1)  ẍ + f(x) = 0,

where the overdot stands for the derivative with respect to time and f(x) is a nonlinear function of x, it is rather unusual to encounter differential equations with exact periodic solutions. It is even more unusual to find differential equations with exact periodic solutions in terms of trigonometric functions. This makes the Ermakov-Pinney equation unusual and underlines its high usefulness in science and engineering. In this way many applications, in classical mechanics as well as in quantum mechanics for example, have been carried out during recent decades ([1], [2]). The Ermakov-Pinney equation (1.2), whose standard autonomous form reads ẍ + ω²x = k/x³, has been studied in [2] to show the existence of new periodic solutions and non-periodic solutions. In [3] an exceptional Lienard equation with strong and high-order nonlinearity is presented. A Lienard equation with a periodic solution in terms of a single trigonometric function, which may lead to a quadratic Lienard type equation with a periodic solution exhibiting harmonic oscillations, and which contains several well-known equations, like the Ermakov-Pinney equation [4], the Mickens truly nonlinear oscillators and the cubic Duffing equation, as special cases, has never been highlighted in the literature despite the well-established theory of differential equations, as is carried out in [3]. According to [5] and [6], all Ermakov-Pinney equations may be reduced to this form using a variable change. One may say that the cubic singularity defines the nonlinear property of the Ermakov-Pinney equation, so a differential equation with cubic nonlinearity may be said to be of Ermakov-Pinney type.
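For the standard autonomous Ermakov-Pinney equation ẍ + ω²x = k/x³ (ω, k > 0), Pinney's closed-form solution with x(0) = x0 and ẋ(0) = 0 is x(t) = [x0² cos²(ωt) + (k/(ω² x0²)) sin²(ωt)]^(1/2), periodic with period π/ω. The sketch below checks this numerically with a hand-rolled RK4 integrator; the parameter values are illustrative:

```python
import math

def rk4_ep(omega, k, x0, v0, t_end, n_steps=20000):
    """Integrate x'' + omega^2 * x = k / x^3 with classical RK4."""
    def acc(x):
        return -omega**2 * x + k / x**3
    dt = t_end / n_steps
    x, v = x0, v0
    for _ in range(n_steps):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return x, v

def pinney_exact(t, omega, k, x0):
    """Pinney's solution for x(0) = x0, x'(0) = 0."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return math.sqrt(x0**2 * c*c + (k / (omega**2 * x0**2)) * s*s)

omega, k, x0 = 1.0, 1.0, 2.0
t = math.pi / 3.0
x_num, _ = rk4_ep(omega, k, x0, 0.0, t)
print(abs(x_num - pinney_exact(t, omega, k, x0)))  # tiny: they agree
# and the motion returns to the initial state after one period pi/omega:
x_per, v_per = rk4_ep(omega, k, x0, 0.0, math.pi / omega)
print(abs(x_per - x0), abs(v_per))  # both tiny
```

This is the sense in which the Ermakov-Pinney equation is "unusual": despite the cubic singularity, its periodic solutions are expressible through elementary trigonometric functions.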
In this perspective, consider the Lienard differential equation ([2], [3]). Making q = 2 yields the equation shown. In view of the above, the general solution of (1.5) is written as the quadrature defining J. As can be seen, the value of the integral in J cannot be obtained exactly. A further change of variable in terms of trigonometric or hyperbolic functions may also be performed, but this does not solve the problem.
However, it shows that the general solutions of some special cases of equation (1.5) are not periodic.
Equation (1.5) may be reduced to the form (1.11), where α = 2n and n is an integer. Equation (1.11) may be of physical importance since it has the structure of the truly nonlinear oscillators formulated by Mickens in his book [7], and contains the famous Ermakov-Pinney equation as a special case. It is also known that differential equations with power nonlinearities are often encountered in the mathematical modeling of physical problems. A vast literature exists on the topic of truly nonlinear oscillators, and during the last decades many authors have investigated these nonlinear differential equations. Being nonlinear, they have no exact explicit solutions in general; moreover, they cannot be solved by the well-known standard approximate analytical techniques [7]. So the existence of periodic solutions of these equations is still under some debate. This becomes a particularly attractive research problem when a second-order autonomous truly nonlinear equation has a singularity at the origin and can have no critical point, whereas the existence of a critical point is, according to [8], a necessary condition for a planar autonomous system to have a periodic solution. This was the case for the so-called pseudo-oscillator investigated in [8], whose authors concluded that such a differential equation has no periodic solution. In contrast, the author of [9] showed that a periodic solution, at least a non-smooth one, does exist, and carried out a theory to build such periodic solutions. In [10] the two general solutions predicted in [8] were calculated exactly, and the authors of [10] likewise concluded that no smooth periodic solution exists. Equation (1.11) has a cubic singularity at the origin for positive integer n, but may have fixed points; for negative n, singularities also appear.
Choosing n = 1, that is α = 2, reduces the equation accordingly. For b = 0, equation (1.12) reduces to the well-known restricted cubic Duffing equation ([7], [11], [12]), of which it is said that all solutions are periodic. Contrary to these authors and several others, it has been shown that such an equation may exhibit non-periodic, precisely complex-valued, solutions [13].
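The periodicity claim for the restricted cubic Duffing equation can be probed numerically. Assuming its standard restricted form ẍ + x³ = 0 (the coefficients here are illustrative, since equation (1.12) is not reproduced above), the exact period at unit amplitude is 4√2 ∫₀¹ du/√(1 − u⁴) ≈ 7.4163, which a fourth-order Runge-Kutta integration recovers:

```python
import numpy as np

# Restricted cubic Duffing oscillator in its standard form x'' + x^3 = 0
# (illustrative coefficients, not taken from equation (1.12) above).
def f(state):
    x, v = state
    return np.array([v, -x ** 3])

def rk4_step(state, h):
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, t = 1e-3, 0.0
state = np.array([1.0, 0.0])          # unit amplitude, released from rest
prev_v, crossings = state[1], []
while len(crossings) < 2 and t < 20.0:
    state_new = rk4_step(state, h)
    t += h
    # v vanishes at t = T/2 and t = T; detect sign changes of v.
    if prev_v != 0.0 and np.sign(state_new[1]) != np.sign(prev_v):
        frac = prev_v / (prev_v - state_new[1])   # linear interpolation
        crossings.append(t - h + frac * h)
    prev_v, state = state_new[1], state_new

period = crossings[1]                  # second zero of v is one full period
```

The computed period agrees with 7.4163 to within the integration tolerance; the period scales inversely with amplitude, illustrating the amplitude-dependent frequency typical of truly nonlinear oscillators.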
Putting n = 4 into the equation
Qualitative properties of solutions
The qualitative properties of solutions to (1.11) are investigated in this section using the phase-plane method. Equation (1.11) is equivalent to the planar autonomous dynamical system (2.1). The fixed point is defined by y = 0 together with a condition on x. As one may see, for b/a = −1 the critical point is real, but for b/a = 1 the coordinate x may become complex. From (2.1) one may write the corresponding first-order relation, and separation of variables leads to an integrable form. By integration, one may obtain the integral curves. This means, according to equation (1.6), that the integration constant can be chosen as c = 0, so that the Hamiltonian of the system can be written down, where B is an arbitrary constant. The authors of [6] observe that the analysis of Lie point symmetries is not adequate for (3.3), which rather requires the calculation of nonlocal symmetries. As the evaluation of nonlocal symmetries may be complicated, the authors of [6] apply the Jacobi last multiplier approach to find the solution of (3.1) in terms of a time-dependent integral. On the other hand, equation (1.3) is also investigated in [14]. The authors of [14] succeeded in calculating a general solution of (1.3) from which, using the previous relation X² = b − ax², one may secure the general solutions of (1.3), where K is an integration constant.
Solution using the auxiliary equation (3.1). Let us consider the generalized Sundman transformation
theory introduced recently in the literature by Akande and coworkers [15]. The generalized Sundman transformation is a powerful change of variables which allows one to solve differential equations with few mathematical manipulations. In the theory introduced by Akande et al. [15], the harmonic oscillator equation is related to the second-order differential equation in which A₀, β, l, γ are arbitrary parameters, g(z) ≠ 0 and ϕ(z) are arbitrary functions of z, and the prime denotes differentiation with respect to z. The application of ϕ(z) = ln(f(z)) leads to the next equation. Putting g(z) = z and f(z) = z² into (3.13) allows one to obtain the required form. Substituting (3.19) into (3.9) yields the general solution to (3.15). Using (3.20) one may deduce the solution of (3.1). Therefore the solution of (1.3), where b = −1, becomes (3.23); for ν < 0, the general solutions (3.23) reduce to (3.24) x(t) = ± 2 1 +
Exact periodic and complex-valued solutions.
3.2.1. Periodic and complex-valued solutions of (1.12). Equation (1.12) is obtained from equation (1.11) when n = 3, that is, when α = 6. Two cases may be investigated.
Periodic solution
For reasons of simplicity we choose a = b = 1. In this case the integral J takes a form which may be rewritten as [17] (3.26), where 0 < φ < ∞ and c₁ is an arbitrary parameter.
By integration, (3.26) reduces to [17] (3.27), with modulus k² = (2 + √3)/4. Using (3.27), one may write an expression from which φ may be obtained. In this situation the relation becomes x^α = 1 − X², since a = b = 1, and the solution x takes its definitive form. The second case corresponds to the situation where i, the purely imaginary number, appears. The equation (3.32) gives [17] (3.33), where 0 < φ < ∞ and c₂ is an arbitrary parameter.
The evaluation of the integral in (3.33) leads to (3.34), which may be rewritten in the form (3.35). From (3.36) one may get φ. In the present case φ = X/√(−1), which is rewritten as φ = −iX, that is, X = iφ, from which one may secure the complex-valued solution x(t).
Periodic solution: a = b = 1. In this case the integral J is written in the form shown. Using (1.7) one may get the equation ([16], [17]), where c₃ is an arbitrary parameter. Therefore, by integration, the equation (3.41) reduces to (3.42), where 0 < φ ≤ 1. From (3.42) one may obtain the following equation, which is rewritten in the form (3.44). From equation (3.44) one may secure φ. Using the relation φ = X/√b, that is X = φ since b = 1, the equation (3.45) is rewritten accordingly. This condition leads to the equation [17] (3.48), where 0 < x ≤ 1. The integral in (3.48) is known as a hyperelliptic integral, and its evaluation gives [18] (3.49), which may be rearranged in the form (3.50). Thus, the above shows that under the conditions a/b = 1 and 2 ≤ n ≤ 4, the explicit general solutions of (1.11) are periodic.
In the sequel of this work, the quadratic Lienard type equations related to equation (1.13) are examined.
Quadratic Lienard type equations
To determine the quadratic Lienard type equations related to (1.13), consider the change of variable introduced above; this shows for the first time that the Jacobi elliptic function cn [18] may be a solution of second-order autonomous non-polynomial differential equations. Now a conclusion of this work may be addressed.
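The role of cn can be verified independently of equation (1.13): the Jacobi function cn(t, m) satisfies the standard identity cn'' = (2m − 1) cn − 2m cn³, i.e. it solves a second-order autonomous differential equation. A numerical check of this identity:

```python
import numpy as np
from scipy.special import ellipj

# Jacobi cn(t, m) satisfies the standard identity
#   cn'' = (2m - 1) cn - 2m cn^3,
# i.e. it solves a second-order autonomous differential equation.
m = 0.5
t = np.linspace(0.0, 5.0, 20001)
h = t[1] - t[0]
_, cn, _, _ = ellipj(t, m)            # ellipj returns (sn, cn, dn, ph)

# Second derivative of cn by central differences on interior points.
cndd = (cn[2:] - 2.0 * cn[1:-1] + cn[:-2]) / h ** 2
residual = cndd - (2.0 * m - 1.0) * cn[1:-1] + 2.0 * m * cn[1:-1] ** 3
max_res = float(np.abs(residual).max())
```

The residual vanishes to finite-difference accuracy for any modulus m, which is the sense in which cn solves an autonomous second-order equation; the specific equation treated in the paper differs, but the mechanism is the same.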
Conclusion
In this paper a general class of truly nonlinear oscillator equations has been presented. The conditions for the existence of periodic solutions were established, and explicit general periodic solutions were examined. The general solutions of a well-known Ermakov-Pinney type equation were also calculated. Finally, it has been shown that the Jacobi elliptic function cn may be a solution of second-order autonomous non-polynomial differential equations.
Conflicts of Interest:
The author(s) declare that there are no conflicts of interest regarding the publication of this paper.
A step towards Torwali machine translation: an analysis of morphosyntactic challenges in a low-resource language
Torwali is an endangered language spoken in the north of Pakistan. It is a computationally challenging language because of its RTL Perso-Arabic script, non-concatenative nature and distinct word alterations. This paper discusses issues and challenges regarding grammatical structure and divergence in terms of lexicon as well as morphological make-up for the machine translation of a little-studied language. It includes the creation of NLP tools such as a part-of-speech (POS) tagger and a morphological analyser built with HFST, which is based on the idea of encoding a lexicon and morphological rules using finite-state devices. The work on which this paper is based will serve as a source for Torwali finite-state morphology and its future computational growth, since electronic dictionaries are usually equipped with a morphological analyser, and it will also be helpful for developing language pairs.
Introduction
Torwali belongs to the Kohistani sub-group of the Indo-Aryan Dardic languages, spoken in the upper reaches of district Swat of northern Pakistan. It has two dialects (the Bahrain and Chail dialects), with a total of approximately 90,000 to 100,000 speakers.
Torwali is written from right to left in a cursive, context-sensitive Perso-Arabic script and has a distinctive grammar (morphology + syntax). Being a marginalized and low-resource language, it has no robust morphology resources, which hinders progress on NLP (Natural Language Processing) tools for Torwali, though a digital Torwali dictionary is available along with some structured data. This paper discusses an attempt to create a morphological analyzer using HFST from scratch.
In NLP, morphological analysis is used to identify the morphemes and affixes of words in a language, and individual words are analyzed into their components. Apart from computational linguistics, there are other applications that require morphological analysis, e.g. text processing, information retrieval and user interfaces.
Morphologies nowadays are commonly written using special-purpose languages based on finite-state technology; one of these is HFST, which is based on regular expressions.
Goals
The goal of this study is to create a baseline system that paves the way for machine translation of Torwali, covering:
- POS tagging
- Creating a lexc lexicon
- Basic inflection rules using twol (two-level rules)
Unicode and input method
UTF-8 is used as the encoding scheme, as XFST/HFST files are always treated as UTF-8.
As an input tool, the TRF phonetic keyboard (TRF 2L V1.0) is used, which was developed so that users can easily input text without resorting to an on-screen keyboard.
Morphological analysis using HFST
HFST (Helsinki Finite-State Technology) is a framework for compiling and applying linguistic descriptions with finite-state methods. Finite-state transducer methods are useful for solving problems involving language identification via morphological processing and POS tagging. There are two principal files in a morphological transducer in HFST: a lexc file, which is concerned with morphotactics, i.e. the way morphemes are joined together in a word, and a twol file, which is used to describe phonological and orthographical alternation rules, i.e. what happens when the morphemes are joined together.
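The division of labour between the two files can be illustrated with a toy analyser written in plain Python: a lexicon pairs stems with continuation classes (the lexc role), while a separate rule layer adjusts the surface string where morphemes meet (the twol role). All stems, suffixes and the boundary rule below are hypothetical Latin-transliterated placeholders, not attested Torwali data.

```python
# Toy illustration of the lexc/twol division of labour.
# All stems, suffixes and the boundary rule are HYPOTHETICAL placeholders.

# "lexc" role: morphotactics -- which suffix set a stem may continue into.
LEXICON = {
    "ban": "V",    # hypothetical verb stem
    "xar": "N",    # hypothetical noun stem
    "gala": "N",   # hypothetical vowel-final noun stem
}
SUFFIXES = {
    "V": {"+Inf": "u", "+Prs+Fem": "i"},
    "N": {"+Pl+Obl": "e"},
}

# "twol" role: surface alternation applied where morphemes join.
def apply_rules(stem, suffix):
    # Hypothetical rule: a stem-final 'a' drops before a vowel-initial suffix.
    if stem.endswith("a") and suffix[:1] in "aeiou":
        stem = stem[:-1]
    return stem + suffix

def generate(stem, tag):
    return apply_rules(stem, SUFFIXES[LEXICON[stem]][tag])
```

Here generate("ban", "+Inf") yields "banu" by plain concatenation, while generate("gala", "+Pl+Obl") yields "gale", the boundary rule having deleted the stem-final vowel; in HFST the same split is expressed declaratively and compiled into transducers.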
For the morphological analysis of Torwali, HFST/finite-state transducers were chosen because the Torwali language is as yet too under-resourced for statistical machine translation, and because this implementation is done using Apertium, an open-source machine translation platform in which HFST can be used.
LEXC
The Lexicon Compiler, or LEXC, is a finite-state compiler, producing what is also called a lexical transducer, that reads a set of morphemes and their morphotactic combinations in order to create a finite-state transducer of a lexicon. A LEXC file contains morphemes grouped into sub-lexicon sets, which in turn contain finite strings separated by ':' and a continuation class (a lexicon name).
TWOLC
TWOLC, the Two-Level Compiler, is a rule compiler used for compiling two-level grammars into sets of finite-state transducers. Two-level rules are constraints on the correspondence between lexical word forms and surface forms; they describe morphological alternations such as ڙینگ:ڙینگو (weep: weeping) and بن:بنو (say: saying). TWOLC takes the surface forms produced by LEXC and applies rules to them; the rules vary depending on the morphological alteration of the stem, morphologically or phonologically conditioned deletion of a suffix, morphologically or phonologically conditioned insertion, and morphologically or phonologically conditioned symbol change.
Torwali Morphology
Torwali has a unique morphology: it is basically a fusional language which uses several strategies, such as stem modification, reduplication, and the existence of words in inflected, derived, compound and root forms. The morphological analyzer separates root and suffix morphemes in all lexical entries, i.e. in أمیژیل and لناچا, /أمیژ/ and /لن/ are roots and /یل/ and /چا/ are suffixes. The purpose of this section is to discuss Torwali morphology and its implementation in HFST for the main grammatical categories of Torwali, i.e. nouns, verbs, adjectives and pronouns.
Nouns
In Torwali, nouns are inflected for number and case, and the stem can be joined by an optional plural suffix and an optional oblique case marker. Torwali uses several strategies to mark plurality, but the primary morphological method is tone together with verb agreement: most singular nouns carry a tone with rising pitch from low to high, while their plural counterparts carry a low-pitch tone. Because of the difficulty of representing tone, for Torwali words that use tone to mark plurality the following approach is used, in which singular/plural for masculine and feminine are handled in a single paradigm. The resulting output marks singularity and plurality for words having a tonal change but gives no description of the tone's pitch. To form the plural oblique of nouns, a suffix /e/, /ے/, is added to the stem, as in the words /خار/, /خارے/ and /شان/, /شانے/. Reduplication is another strategy for communicating plurality and intensity, though not in the same way as tone. For instance: /گال مال/, /میل گیل/, /چُن چُن/, /پہٹ پہٹ/.
For noun inflection that involves stem modification, forming standard rules is complex; the general pattern is that for the majority of masculine nouns the vowel changes from \a\ to \ə\, and for feminine nouns from \a\ to \ae\, but some masculine and feminine nouns behave differently. For morphological alteration of the stem, the following rules must be implemented using twolc, taking the surface forms produced by lexc.
Verbs
Torwali verbs inflect for tense, aspect, mood and gender, and most verb forms make gender and number distinctions only, with no distinction for person. Torwali has three tenses: present, past and future. The suffix /i/, /ی/, can be used to mark feminine singular forms and present tense on feminine singular forms; /u/, /و/, serves as the masculine singular suffix and marks present tense on masculine singular forms; and the suffix /i/, /ی/, is also used for present tense on plural forms. For infinitive verbs the suffix /u/ is added to the stem.
As a test, only present tense on masculine and feminine forms, infinitives, and transitive and intransitive verb forms were selected; in the continuation class the suffixes are added to mark the inflection associated with each of them, defined with suitable tags as shown below.
When compiled with hfst-lexc and tested with hfst-fst2strings, the analyzer analyses these verb forms as shown, from which it is concluded that the following implementations can be made. These rules can be applied to verbs whose stem ends in a consonant.
- Adding the suffix /دُ/ to represent past tense on an infinitive verb.
- Adding the suffix /نین/ to mark future tense on a finite verb.
- Adding the suffix /سأت/ to mark the inceptive of an infinitive verb.
- For the present perfective, adding the suffix /و/ on masculine singular forms and /ی/ on both plural and feminine singular forms.
- The suffixes /ودو/ for masculine singular, /یجی/ for feminine singular and /یدی/ for plurals mark the present perfective on finite verbs.
- The suffixes /وشو/ for masculine singular and /یشی/ for both feminine singular and plurals mark the past perfective on finite verbs.

Verbs whose stems end in a vowel inflect differently: they sometimes behave like consonant-final stems with a minor modification, with some plural forms taking /أ/ on the stem before the plural suffix is applied; however, most vowel-final verb stems follow different configurations.
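The suffix inventory listed above can be collected into a simple lookup table. The sketch below is purely concatenative, using the suffixes from the rules above for consonant-final stems; the twol-style stem alternations and the vowel-final stem behaviour are deliberately omitted, and the feature labels are our own shorthand, not HFST tags from the paper.

```python
# Suffixes for consonant-final verb stems, taken from the rules listed above.
# Feature labels are shorthand introduced here, not HFST tags from the paper.
SUFFIXES = {
    "pst.inf": "دُ",            # past tense on an infinitive verb
    "fut.fin": "نین",           # future tense on a finite verb
    "incp.inf": "سأت",          # inceptive of an infinitive verb
    "prsPerf.m.sg": "و",        # present perfective, masculine singular
    "prsPerf.f.sg.pl": "ی",     # present perfective, feminine singular / plurals
    "prsPerf.fin.m.sg": "ودو",  # present perfective on finite verbs
    "prsPerf.fin.f.sg": "یجی",
    "prsPerf.fin.pl": "یدی",
    "pstPerf.fin.m.sg": "وشو",  # past perfective on finite verbs
    "pstPerf.fin.f.sg.pl": "یشی",
}

def inflect(stem, feature):
    # Concatenative sketch only: no stem alternation rules are applied.
    return stem + SUFFIXES[feature]
```

For example, inflect applied to the stem بن with the masculine-singular present perfective suffix reproduces the بن:بنو pair cited in the TWOLC section.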
Adjectives, Pronouns, Adverbs and closed classes
In a similar way, adjectives, pronouns, adverbs, postpositions, conjunctions and interjections have been implemented with the same level of detail.
Result and Conclusions
This work presents a straightforward implementation of a Torwali morphological analyzer using HFST, covering the basic inflections of nouns and verbs, with other POS such as adjectives, adverbs, pronouns and postpositions being tagged. We found HFST a good choice for implementing Torwali morphology, as this is the first attempt to implement Torwali morphology using FSTs. However, to develop a full-fledged morphological analyzer, more work has to be done. The major problems we face are the irregular stem changes in nouns, the marking of number in nouns through change of tone, and the distinct behavior of vowel-final verbs and nouns. More needs to be learned about Torwali morphology regarding affixes and varying stems, and more rules need to be defined.
Future work
This work could be further enhanced with the following extensions, depending on feasibility:
- Addition of missing diacritic marks to words.
- A technique to interpret the tone of nouns, in order to identify singular/plural nouns by tone.
- Algorithms to differentiate phonetically similar words.
- A comprehensive implementation of Torwali syntax.
Data-related and methodological obstacles to determining associations between temperature and COVID-19 transmission
More and more studies have evaluated the associations between ambient temperature and coronavirus disease 2019 (COVID-19). However, most of these studies were rushed to completion, rendering the quality of their findings questionable. We systematically evaluated 70 relevant peer-reviewed studies published on or before 21 September 2020 that had been implemented from community to global level. Approximately 35 of these reports indicated that temperature was significantly and negatively associated with COVID-19 spread, whereas 12 reports demonstrated a significantly positive association. The remaining studies found no association or merely a piecewise association. Correlation and regression analyses were the most commonly utilized statistical models. The main shortcomings of these studies included uncertainties in COVID-19 infection rate, problems with data processing for temperature, inappropriate controlling for confounding parameters, weaknesses in evaluation of effect modification, inadequate statistical models, short research periods, and the choices of research areal units. It is our viewpoint that most of the identified 70 publications have significant flaws that prevent them from providing a robust scientific basis for the association between temperature and COVID-19.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic, which is ongoing at the time of writing, has attracted increasing research interest (Gong et al 2020). An understanding of the driving factors of COVID-19 transmission is urgently needed owing to the extensive public health implications (Kraemer et al 2020). Whether warm temperatures suppress the spread of COVID-19 has become a hot topic of discussion that has attracted considerable social media and political attention worldwide, since preliminary laboratory studies indicated that high temperature can lower the survival of the COVID-19 virus (Baker et al 2020, NAS 2020). Inputting the keywords 'temperature' and 'COVID-19' into the Web of Science yielded hundreds of results (as of 21 September 2020), but the main findings of these publications were not consistent (Fang et al 2020, Jüni et al 2020, Pan et al 2020). As a large proportion of this research had been conducted in a rush (Glasziou et al 2020, Heederik et al 2020), its findings may be more likely to generate public confusion than to contribute to scientific knowledge (Zeka et al 2020). A recent study criticized all of the studies associating ambient air pollution with COVID-19 incidence and mortality, arguing that they were susceptible to significant sources of bias (Villeneuve and Goldberg 2020). Compared with studies on air pollution associated with the COVID-19 pandemic, more research has been conducted on the correlations between temperature and COVID-19 transmission. Data-related and methodological concerns are particularly prominent in the latter studies, inhibiting their efforts to explicitly elucidate the complexity of the role of temperature in COVID-19 spread. In this study, we first identified relevant reports and then attempted to explore the adequacy of the data and methods used, rather than to conclude whether or not temperature can influence COVID-19 transmission.
Methods
To identify articles associated with temperature and COVID-19 spread, we searched Science-Direct (www.sciencedirect.com/search), PubMed (https://pubmed.ncbi.nlm.nih.gov/), and Web of Science (www.webofknowledge.com) using the search terms 'COVID-19' or 'SARS-CoV-2' and 'temperature' and 'association' through 21 September 2020. After examination of the titles, abstracts, and full text, 70 studies remained, as illustrated in figure 1. Since we excluded papers without peer review, we did not use other search engines to examine pre-printed literature posted on the Internet.
Research status
The details of the 70 retrieved articles, including their locations, study design, adopted models, study period, confounding variables, and main findings, are presented in supplementary material table S1 (available online at stacks.iop.org/ ERL/16/034016/mmedia). Approximately 35 reports indicated a negative association between temperature and COVID-19 transmission (table 1), whereas 9 studies suggested a positive association. Some researchers demonstrated that such associations were piecewise, or found no clear link between temperature and COVID-19 spread. Regarding location, approximately 73% of the studies (10 in one city and 41 in multiple regions) had been conducted within one country. Of these 51 studies, 15 had been conducted in China; this is unsurprising, because COVID-19 was first detected in Wuhan, China. Seven studies had been conducted in the U.S. and India, followed by four in Spain, three in Brazil, and three in Japan (figure 2).
COVID-19 infections
As shown in table 1, the daily new or cumulative COVID-19 counts were the most commonly adopted dependent variables, of which most were from official health departments. During the early stage of the COVID-19 outbreak, the underreporting of COVID-19 infections and deaths due to the lack of adequate testing in most countries might have influenced the determined temperature-associated effects (Chatterjee 2020). Furthermore, testing ability commonly increases as a pandemic evolves (Tromberg et al 2020), thereby inducing bias in the time-series analysis. Nonetheless, few of the reports we retrieved considered the effects of testing ability in their analyses (Pan et al 2020).
There are marked discrepancies in testing ability between regions worldwide (https://ourworldindata.org/coronavirus-testing#testing-for-covid-19background-the-our-world-in-data-covid-19-testing-dataset). Testing coverage is particularly low in some developing countries. Such inequalities should be inspected carefully because they may cause considerable estimation errors in ecological studies (Iqbal et al 2020, Pan et al 2020). In addition, uncertainties associated with asymptomatic COVID-19 infections or variations in silent transmission between regions can significantly modify the estimation of the associations between temperature and COVID-19 spread (Jia et al 2020).
The changing definitions or misclassification of COVID-19 during the pandemic also affected the COVID-19 counts. Using China as an example, the case definition was initially narrow and was broadened later to include more infection cases as knowledge increased (Tsang et al 2020). However, most authors did not consider the effects of changing the case definition in their statistical analyses.
Study design
Of the identified 70 publications, there are 24 ecological studies and 45 time-series studies (table 1). Particularly, the time-series studies can be further divided into two types: temporal (31) and spatio-temporal (14) studies. Each study type has inherent possible biases (Villeneuve and Goldberg 2020), i.e. the ecological fallacy or cross-level bias in the ecological studies. The study design is particularly crucial in relation to the statistical models and confounding variables. For example, in most temporal studies, the correlation analysis was commonly adopted, without any confounding variables. Both the ecological and time-series studies can be analyzed by regression and correlation analysis. Some statistical models, including the (S)ARIMA approach, are widely used in time-series analysis.
Statistical model
Correlation analysis was conducted in more than 30% of the reports. In particular, of the 21 studies that used correlation analysis, 13 implied a negative association, whereas 9 exhibited a positive association (table 1). The conclusions of the correlation analyses were not always solid because they did not control for any other confounding factors, which might have masked the true effect. Over the last 6 months, temperatures have increased or decreased owing to seasonal changes, while the spread of COVID-19 has in some cases been strongly suppressed by strict policy interventions. Thus, although most of the reviewed authors declared that their correlation analysis results did not indicate causality, these publications may still confuse public opinion regarding driving factors. Regression models were also widely used in the retrieved studies. Most of the researchers had conducted time-series analysis, whereas some did not follow the accepted methods of time-series analysis. We noted that multiple linear regression was utilized in some studies (Haque and Rahman 2020, Ladha et al 2020), implying that the error in daily new cases was assumed to have a normal distribution. For count data (such as infection cases), negative binomial, Poisson, and zero-inflated regression models are more suitable, since they respect the distributional properties of counts, with the negative binomial model in particular accommodating overdispersion (Villeneuve and Goldberg 2020).
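The overdispersion point can be made concrete with a small simulation (synthetic data, not actual COVID-19 counts): negative-binomial counts with mean 20 and dispersion 2 have a variance roughly eleven times the mean, so an error model that assumes variance equal to the mean (Poisson), let alone the Gaussian model behind ordinary least squares, is misspecified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic overdispersed daily counts drawn from a negative binomial
# distribution with mean mu and dispersion parameter r (illustrative values).
r, mu = 2.0, 20.0
p = r / (r + mu)                      # numpy's (n, p) parameterisation
counts = rng.negative_binomial(r, p, size=5000)

mean, var = counts.mean(), counts.var()
# For a Poisson model var == mean; here var/mean is about 1 + mu/r = 11,
# which is the overdispersion a negative binomial family accommodates.
dispersion_ratio = var / mean
```

The variance-to-mean ratio is a quick diagnostic that reviewers could apply to reported case series before accepting a Poisson or linear specification.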
Besides correlation and regression analyses, some of the researchers used machine learning techniques (Malki et al 2020, Pramanik et al 2020). However, we found the methodologies of these studies difficult to follow (Malki et al 2020, Pramanik et al 2020), and their conclusions evinced insufficient understanding of the mechanisms involved.
The factor of temperature
Another concern is how to choose a sound factor to represent temperature. In the identified studies, the authors used the maximum, average, or minimum daily temperature (Goswami et al 2020), the diurnal temperature range, moving averages (Xie and Zhu 2020, Qi et al 2020a), lagged effects (Briz-Redón and Serrano-Aroca 2020), cross-basis functions of temperature (Runkle et al 2020, Shi et al 2020), and yearly or monthly average temperature (Mandal and Panwar 2020, Wei et al 2020). However, at this stage, the differences in model performance between these approaches remain unclear. Furthermore, as a large proportion of the publications did not include sensitivity analysis or explain the reasons for their choices, we cannot determine whether these choices were based on statistical significance, scientific evidence, or other factors. In addition, the median incubation period for COVID-19 is estimated to be 4-5 d, and incubation can extend to 14 d (Bi et al 2020). Together with the additional days required for laboratory confirmation, using the temperature on the day of case confirmation is not appropriate.
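The lag and moving-average constructions mentioned above are straightforward to compute; the sketch below uses pandas with hypothetical daily values (the column names and numbers are ours, not from any reviewed study). The 5-day shift mirrors the median 4-5 day incubation period noted above.

```python
import pandas as pd

# Hypothetical daily mean temperatures; values are illustrative only.
df = pd.DataFrame({
    "temp": [10.0, 12.0, 11.0, 9.0, 8.0, 7.5, 9.5, 10.5, 11.5, 12.5],
})

# 7-day moving average of temperature (defined only once a full window exists).
df["temp_ma7"] = df["temp"].rolling(window=7).mean()

# Lagged exposure: the temperature 5 days before case confirmation.
df["temp_lag5"] = df["temp"].shift(5)
```

Sensitivity analysis then amounts to refitting the model over a grid of window lengths and lags, which few of the reviewed studies reported.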
Meteorological factors and air pollutants
Note for table 1: (a) some reports did not have detailed information on the study period; (b) some reports used multiple methodologies or confounding variables; (c) some publications used multiple dependent variables.
Approximately 25 studies did not include any confounding variables, and most of these studies adopted correlation analyses. Most confounding variables fall into two types: time-varying factors (meteorological factors, air pollutants, policy interventions, and others) and location-varying factors (e.g. demography, socioeconomic status, and population). Across the identified 70 studies, different confounding factors pose threats to different study types. In particular, time-varying risk factors are threats to both types of time-series studies, whereas location-dependent factors are threats to ecological and spatio-temporal, but not purely temporal, time-series studies. With respect to time-varying factors, we noted that approximately half of the retrieved reports controlled for meteorological factors, particularly humidity, wind speed, and visibility (table 1). However, as with the measurement of temperature, the lagged effects of meteorological factors should be considered. Some studies conducted at the country or global scale simply averaged the temperature, humidity, or other meteorological factors (Kumar and Kumar 2020, Sarmadi et al 2020), even though weather conditions in some countries, such as the U.S., Russia, India, and China, vary considerably. In contrast, some authors incorporated regional measures for nationwide COVID-19 counts (Iqbal et al 2020, Sarkodie and Owusu 2020, Sarmadi et al 2020), because COVID-19 is prone to outbreaks in mega-cities, particularly those with more people traveling to and from international locations (Dong et al 2020a). Thus, appropriately weighting the corresponding meteorological factors between regions is crucial to disentangling the temperature-related correlations.
Some of the retrieved studies also used air pollutants as covariates, such as particulate matter, sulfur dioxide, and nitrogen dioxide (NO2) (Adhikari and Yin 2020, Azuma et al 2020, Jiang et al 2020). The major objective of these studies was to explore the correlations between exposure to air pollutants and COVID-19 transmission, considering that air pollutants are widely associated with human health. Some scientists have argued that such analyses add incremental value during an active pandemic (Heederik et al 2020, Villeneuve and Goldberg 2020).
Policy interventions
Prior studies have demonstrated that strong policy interventions, including face masks, social distancing, hand hygiene, travel or work restrictions, and community isolation, can greatly lower the transmission of COVID-19 (Chu et al 2020, Zhou et al 2020). However, only four of the retrieved studies controlled for social distancing (Rubin et al 2020), non-pharmaceutical interventions (Fang et al 2020), or strict COVID-19 measures (Ozyigit 2020) in their analyses. In a time-series analysis, policy intervention would bend the growth curve in the later period of COVID-19 spread and also decrease the reproduction number or suppress the number of positive counts (Davies et al 2020). It is questionable whether robust conclusions can be generated by models that omit policy interventions. Existing studies have already determined that the stringency indexes of governments' responses (e.g. social distancing, school closing, and public event cancellation) vary substantially between regions (Ashraf 2020, Hale et al 2020). This spatial inequality could reshape the curve between temperature and COVID-19 spread. However, none of the studies we reviewed evaluated how spatial variations in government responses influenced the associations between temperature and COVID-19, especially in ecological studies.
Location-varying factors
Approximately 50% of the publications accounted for location-varying factors, such as demographic factors, socioeconomic factors (e.g. race, occupation, education, income, age structure, number of hospital beds, and life expectancy), and spatio-temporal factors (e.g. number of days since the first confirmed case), especially in the ecological and spatio-temporal studies. These time-fixed factors that vary over locations may modify the association of COVID-19 with temperature in multi-location temporal studies. Research has shown that the age structures of North Americans and Europeans increase their vulnerability to COVID-19 mortality (Esteve et al 2020), which may be attributable to the relatively high proportions of older people in these regions. Positive correlations were also demonstrated (figure S1) between the proportion of older people, testing numbers, life expectancy, and gross domestic product per capita worldwide. Thus, researchers need to carefully investigate potential collinearities between the confounding variables before data analysis. Data processing techniques such as principal component analysis and stratified analysis may be required prior to further analysis.
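The collinearity check and dimension reduction suggested above can be sketched as follows. The variable names and correlation strengths below are hypothetical, not taken from the reviewed studies; the sketch simulates four mutually correlated country-level confounders and applies principal component analysis via a singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical country-level confounders: GDP per capita, share of
# older people, tests performed, and life expectancy are simulated to
# be mutually correlated, as figure S1 of the review suggests.
gdp = rng.normal(0, 1, n)
older = 0.8 * gdp + 0.2 * rng.normal(0, 1, n)
tests = 0.7 * gdp + 0.3 * rng.normal(0, 1, n)
life_exp = 0.9 * gdp + 0.1 * rng.normal(0, 1, n)

X = np.column_stack([gdp, older, tests, life_exp])

# Correlation matrix flags collinear pairs before modelling.
corr = np.corrcoef(X, rowvar=False)
print(corr.round(2))

# PCA via SVD of the standardized matrix: if the first component
# explains most of the variance, the confounders are largely redundant
# and one or two component scores can replace them in a regression.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / (s**2).sum()
print(explained.round(2))
```

When the first component dominates, substituting its score for the raw confounders avoids the unstable coefficient estimates that collinearity produces.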
Study period and duration
Some ecological studies utilized the confirmed or cumulative COVID-19 counts on a specific day as the dependent variable (Gupta et al 2020, Sarmadi et al 2020). However, COVID-19 data on a specific day may be greatly influenced by the initial status, growth rate, and calendar date of the first case. Furthermore, the exposure duration of more than 50% of the studies was in the range of 1-3 months or less than 1 month (table 1). Some studies may have selected a short study period before the execution of policy interventions, and this raises another issue: are data from a short study period sufficient? Although there is no uniform criterion for the minimum length of time-series studies, it is questionable whether a study period of 1-3 months is sufficient. For example, establishing the association between air pollution exposure and mortality generally requires a study period of multiple years to control for long-term trends in adverse health effects and address the seasonality of temperature (Dong et al 2020b).
To some extent, this presents a paradox for researchers. In the early stage of the pandemic, many countries or regions were still in the epidemic growth phase, and the growth curve was less influenced by policy interventions; however, data from such a short window may be insufficient to account for temporal trends. Conversely, if a longer study period is adopted, the estimated associations may be driven more by policy interventions, demographic factors, and socioeconomic factors than by temperature.
Research areal unit
The authors of the retrieved studies investigated temperature and COVID-19 transmission at the community, city, provincial or state, country, and global scales. One study using the daily number of new cases nationwide in India revealed a positive association (Kumar 2020), whereas provincial data in India suggested that temperature was negatively associated with the number of COVID-19 cases (Goswami et al 2020). This difference may have been due to the modifiable area unit problem (MAUP), which is a form of statistical bias that arises when incorporating point measurements into districts. A recent study also found that the correlations between COVID-19 mortality and NO 2 were contradictory when aggregated at different levels, indicating that the MAUP should be investigated when exploring the environmental determinants of the COVID-19 pandemic (Wang and Di 2020).
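The MAUP can be illustrated with a toy simulation (the numbers are entirely synthetic, not data from the cited Indian studies): within each areal unit, temperature and case counts are negatively related, yet pooling the point-level data across units yields a positive correlation because the units differ systematically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical provinces: within each, warmer days have FEWER
# cases, but provinces with higher mean temperature happen to have
# more cases overall (e.g. denser cities), so pooling across
# provinces reverses the sign of the correlation.
temps, cases = [], []
for t0, c0 in [(10, 100), (20, 300), (30, 500)]:
    t = t0 + rng.normal(0, 2, 50)
    c = c0 - 5 * (t - t0) + rng.normal(0, 5, 50)
    temps.append(t)
    cases.append(c)

t_all = np.concatenate(temps)
c_all = np.concatenate(cases)

# Pooled correlation: positive, driven by between-province differences.
r_pooled = np.corrcoef(t_all, c_all)[0, 1]

# Within-province correlations: all negative.
r_within = [np.corrcoef(t, c)[0, 1] for t, c in zip(temps, cases)]
print(round(r_pooled, 2), [round(r, 2) for r in r_within])
```

This sign reversal under aggregation is the same mechanism that can make nationwide and provincial analyses of the same country disagree, so the areal unit should be chosen and reported explicitly.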
Other issues
Other limitations were also noted. First, none of the existing studies considered how the infectivity of the virus changed during the COVID-19 outbreak, although this is an important time-varying factor. In addition, geographical variations in viral strains with distinct infection capabilities may introduce biases in ecological studies. Second, some authors adjusted the new/cumulative COVID-19 cases using the baseline on previous days (Zhu and Xie 2020), whereas others did not (Runkle et al 2020, Qi et al 2020b). Similarly, population was not adopted as an offset in all of the studies (Shi et al 2020, Qi et al 2020b). These variations in data processing may have hampered conclusions as to how temperature affects the spread of COVID-19. Meanwhile, in some cases, COVID-19 infections stem from clusters (for example, workers in the food/meat processing industry or markets) rather than the whole population, which should be excluded or specified in statistical analyses.
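The population-offset point can be made concrete with a small hypothetical example. In a Poisson model the offset enters as log E[cases] = log(population) + beta * temperature, so that beta describes incidence rates rather than raw counts; the regions and figures below are invented purely for illustration:

```python
import numpy as np

# Hypothetical regions A, B, C: raw counts suggest region B is worst
# hit, but once population is accounted for (modelling rates rather
# than counts), region A has by far the highest incidence.
pop = np.array([1_000_000, 8_000_000, 3_000_000])   # A, B, C
cases = np.array([5_000, 12_000, 4_500])

rate_per_100k = cases / pop * 100_000

print(int(cases.argmax()))          # index of region with most raw cases
print(int(rate_per_100k.argmax()))  # index of region with highest rate
print(rate_per_100k.round(1))
```

A regression of raw counts on temperature would let population size masquerade as an exposure effect; including log(population) as an offset removes that distortion.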
Investigating the role of temperature in the COVID-19 pandemic is important but challenging. Laboratory studies have observed that high temperature may reduce the survival of the COVID-19 virus (Baker et al 2020, NAS 2020), while field studies have not consistently validated this conclusion. Our suggestion is that the study period should be chosen before the execution of policy interventions, since policy interventions can strongly bend the growth curve of COVID-19. In addition, compared with ecological or time-series studies, a longitudinal study with individual-level data at the global scale promises to better address the association between temperature and COVID-19 transmission. Meanwhile, researchers also need to carefully examine the influence of all potential confounding variables.
Also, we recommend that the influence of temperature on COVID-19 transmission be comprehensively evaluated after the end of this global pandemic. At the time of writing, the second wave of COVID-19 is still developing rapidly in some countries, implying that temperature may be unable to significantly suppress COVID-19 transmission. A very recent study concluded that weather contributed to 17% of the variation in the maximum COVID-19 growth rate, and that UV light, rather than temperature, is most strongly associated with lower COVID-19 growth (Merow and Urban 2020). However, the authors also pointed out that uncertainty remains high and that aggressive policy interventions are likely to be needed (Merow and Urban 2020). Prior studies indicated that variation in population susceptibility is the driving factor of the COVID-19 pandemic, and warm temperature is not anticipated to substantially limit COVID-19 growth (Baker et al 2020, Su et al 2020).
Conclusion
This study revealed that data-related and methodological issues mainly concerned data reliability and processing, and that the inherent uncertainties in the data decreased the reliability of the statistical analyses. Since the COVID-19 pandemic began, the enormous volume of manuscript submissions from researchers in different countries or regions has often led to rushed reviews, which may also be responsible for some of the data and methodological flaws, since many details may have been overlooked in review processes aiming to deliver the newest conclusions regarding the transmission and control of COVID-19. From our point of view, most of the 70 peer-reviewed studies had significant flaws in their methodology or data design, and greater epidemiological rigor is required to yield robust conclusions. We therefore encourage authors, reviewers, and editors to work together to scrutinize relevant research more closely, aiming to produce high-quality studies. With respect to COVID-19 transmission, focusing more on the effectiveness and optimal range of interventions, optimal strategies for reopening the economy and outdoor events, protective materials, and tracing the sources of COVID-19 may better assist the global fight against the pandemic.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Acknowledgments
This study was financially supported by fundings (Nos. GWTX05 and SWJC05) from the National Institute of Environmental Health (NIEH), Chinese Center for Disease Control and Prevention (China CDC). We thank Professor Xiaoming Shi at NIEH, China CDC for his valuable guidance and tremendous help for this study. We thank anonymous reviewers for their insightful comments and constructive suggestions.
Impact of health educational intervention among pregnant and lactating mothers in a rural field practice area of Bagalkot: A non-randomized interventional trial without control
Anemia is very common in the Indian subcontinent. According to the National Family Health Survey (NFHS-4), the prevalence of anaemia in India among adolescent girls aged 15-19 years is 54.1%. Women during pregnancy are more vulnerable to anemia not only because of the synergistic effects of the physiological increase in plasma volume (hemodilution) but also because of increased demand and the poor bioavailability of iron in the food. The objective was to find out the impact of health educational intervention on anaemia among pregnant and lactating women. A non-randomized interventional trial without control was conducted in the rural field practice area of the Department of Community Medicine, S.N. Medical College, Bagalkot, with a sample size of 153 pregnant and lactating women, over a period of one year (March 2018-June 2019). There was an increase in the level of knowledge regarding anemia among pregnant and lactating women after the health education intervention, which was found to be statistically significant by paired t-test. Health education is a cost-effective method of improving knowledge among pregnant and lactating mothers.
Introduction
Women during pregnancy are more vulnerable to anemia not only because of the synergistic effects of the physiological increase in plasma volume (hemodilution) but also because of increased demand and the poor bioavailability of iron in the food, compounded by social factors such as preferential feeding of men, deprivation of good food, and household workload. 1 The attitude and knowledge of pregnant women about anaemia and supplements is probably the missing link and is an important factor, acting as a barrier or motivation for the intake of iron supplements, which have been made freely available to all pregnant women. 2 "Knowledge is the springboard for action." It is believed that improving awareness motivates behavioral change, and it is possible that limited knowledge about anemia interferes with ANC attendance, use of IFA supplements, dietary practices, and the use of anti-helminthic medicine. 3 That is why one of the most effective steps to reduce the prevalence of anemia during pregnancy is health promotion, which is the process of enabling people to improve their health through providing information, health education, and skill training. 3 The present study was undertaken to find out the outcome of intervention in the form of health education.
Objective
To find out the impact of health educational intervention on anaemia among pregnant and lactating women.
Study design
Non-randomized interventional trial without control.
Study setting
The study was conducted in the rural field practice area of the Department of Community Medicine, S.N. Medical College, Bagalkot.
Description of rural field practice area
It is a well-equipped RHTC with very good infrastructure, well connected by road, located eastwards of Bagalkot city, 20 kilometres from the medical college, and covering a population of 19,119. There are 14 anganwadis under this RHTC field practice area. The sample size for pregnant and lactating women was calculated based on the NFHS-4 survey, 1 taking the prevalence of anemia among lactating women as 48.1%.
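The paper does not state the precision used in the sample size calculation, but the reported figure of 153 is close to what Cochran's formula for estimating a proportion gives with the NFHS-4 prevalence of 48.1% and an assumed absolute precision of 8% (the 8% figure is our assumption, not stated in the text):

```python
from math import ceil

def cochran_n(p, d, z=1.96):
    """Minimum sample size to estimate a proportion p with absolute
    precision d at 95% confidence (Cochran's formula)."""
    return ceil(z**2 * p * (1 - p) / d**2)

# Prevalence of anemia among lactating women from NFHS-4.
p = 0.481
# d = 0.08 is an assumption that reproduces a figure close to the
# reported sample size of 153.
print(cochran_n(p, 0.08))  # -> 150
```

The same formula with the conventional p = 0.5 and d = 0.05 gives the familiar 385, which shows how sensitive the required sample is to the chosen precision.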
Inclusion criteria
Pregnant and lactating women residing in the field practice area and are willing to give consent.
Exclusion criteria
1. Those suffering from chronic illnesses. 2. Those lost to follow-up.
Sampling technique for pregnant and lactating women
All the anganwadis were identified, and 153 pregnant and lactating women were identified from the records of the anganwadis. Line listing of the houses of pregnant and lactating women was done. The first house was selected by the lottery method, and subsequent houses were selected continuously, one by one, anganwadi centre by anganwadi centre, till all 153 women were covered in the pre-test. 145 pregnant and lactating women were available for the post-test interview. Those who were not found in the first visit were followed up, and two subsequent visits were made to collect the data.

Method of collection of data for pregnant and lactating women
Duration of the project
Institutional ethical clearance was obtained. After taking informed consent, baseline sociodemographic information regarding the pregnant and lactating women was collected.
Topics for anaemia awareness were allocated to the program management unit team members, who prepared them in the local language to train the link workers during the training session.
The subtopics under anaemia were: 1. Introduction and signs and symptoms of anaemia. 2. Sources of iron rich foods 3. Diagnosis and complications 4. Management of anaemia (iron and folic acid and albendazole) 5. Prevention of anaemia.
Link workers were trained at Rural Health Training Centre, Shirur by using integrated health education methods such as power point presentations, posters and practical demonstrations in vernacular language i.e. in Kannada.
A predesigned, pretested, semi-structured questionnaire was prepared in the Kannada language; the questionnaire contained two parts. The pregnant and lactating women were divided into five groups, each consisting of 35-40 members. The first group was called in the first week at RHTC Shirur. In the first visit, the pre-test was done, and health education regarding anemia was given on the same day by the project management team. The second group was then called in the second week, and so on till the fifth group. Second and third health education sessions followed, and a fourth session was conducted for the post-test evaluation.
Data collected by the link workers was reported to the project management unit and later reviewed.
Phase 2
Activities undertaken in this period were as follows. Monitoring visits by the Program Management Unit (PMU) were done at regular intervals, once every fortnight, for reinforcement and to monitor whether the participants implemented the knowledge from the reinforcement sessions.
Results
We found that more than 50% of the study participants were in the age group of 21 to 25 years, 89% of the participants were Hindu by religion, and more than 50% came from nuclear families. With respect to education, less than 9% were illiterate, 12% held degrees, and 21.4% had completed either a diploma or PUC. With respect to occupation, 80% were housewives and 11.7% were unskilled workers. 70.3% belonged to class IV and V socioeconomic classes and 25% to class III according to the BG Prasad classification. We found an increase in the level of knowledge regarding anemia among pregnant and lactating women after the health education intervention, which was statistically significant on the paired t-test.
Discussion
In a comparable study, the age range of pregnant women was 21-34 years, with a mean age of 26.6±0.9 years; all women belonged to social class 3 and were literate. Of the 100 women, 32 were graduates, 52 had completed higher secondary schooling, and 16 had completed primary schooling; 23 were working while 77 were housewives. 4 In another study, most respondents (n = 212, 62.4%) were 20-29 years of age, with a mean age of 25.6 (SD ± 5.6), married (n = 288, 84.7%), had a secondary level of education (n = 180, 53.4%), were unemployed (n = 167, 49.1%), and earned less than USD 100 per month (n = 305, 93%); only 5.9% (n = 20) had attained a tertiary level of education, and among the 28.2% (n = 96) who were in employment, only 3% (n = 10) were formally employed. 5 In the present study, more than 50% of the participants were in the age group of 21 to 25 years, 89% were Hindu by religion, and more than 50% came from nuclear families; less than 9% were illiterate, 12% held degrees, and 21.4% had completed either a diploma or PUC. 80% were housewives and 11.7% were unskilled workers, and 70.3% belonged to class IV and V socioeconomic classes with 25% in class III according to the BG Prasad classification.
Of the total, only 23% of pregnant women had baseline knowledge regarding the signs and symptoms of anemia; this increased to 66% after the intervention, and the increase was highly significant (p<0.001). 4 Yassin et al in Alexandria, Egypt found that 61.7% of respondents had poor knowledge of dietary practices in pregnancy. 6 However, a contrary finding was reported by Zeng on knowledge of nutrition and related dietary behaviors among pregnant women, where 74.9% of respondents showed good knowledge of dietary practices during pregnancy. 7 The change in the maternal nutritional knowledge score on anemia and iron-rich foods was significantly higher in the intervention group than in the control group. 8 The post-test means of hemoglobin, F(1, 132) = 122, p-value <0.001, and hematocrit levels, F(1, 132) = 373, p-value <0.001, were significantly different and higher in the intervention group (pictorial handbook) compared with the control group; similar results were found for knowledge, food frequency score, and number of IFA tablets taken (p-value <0.001). 9 Women from households without a functional radio (a channel for health education) were 2.07 times more likely to be anemic (95% CI, 1.08-3.00) than women from households with a functional radio. 10 Consistent with these reports, we found an increase in the level of knowledge regarding anemia among pregnant and lactating women after the health education intervention, which was statistically significant on the paired t-test.
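The pre/post comparisons discussed above rest on a paired t-test of each woman's score difference. The sketch below uses simulated scores (the numbers are invented; the paper does not publish its raw data) and computes the statistic from the per-subject differences; scipy.stats.ttest_rel would give the same statistic together with a p-value:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre/post knowledge scores for 145 women (matching the
# post-test sample size); the post scores are simulated to be higher,
# mimicking the reported gain after the health-education sessions.
n = 145
pre = rng.normal(8, 3, n)
post = pre + rng.normal(4, 2, n)

# Paired t-test computed from the per-subject differences d_i:
# t = mean(d) / (sd(d) / sqrt(n)), with n - 1 degrees of freedom.
d = post - pre
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(round(float(t), 1))
```

Because the test is computed on within-subject differences, it removes between-woman variability in baseline knowledge, which is what makes the paired design appropriate here.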
Conclusion
In the present study, we found an increase in the level of knowledge regarding anemia among the pregnant and lactating women after the health education intervention, which was statistically significant. Hence, health education is a cost-effective method of improving knowledge.
Recommendations
1. In the present study, health education sessions on anaemia were conducted among pregnant and lactating women by the trained link workers and the PMU, and a significant increase in knowledge was found. Hence, similar health education sessions should be continued on a regular basis to update knowledge; this will help achieve better pregnancy outcomes and fewer anaemia-related complications. 2. Similar studies can be conducted in larger populations so as to benefit many pregnant and lactating women in rural areas.
Source of Funding
None.
Conflict of Interest
None.
Occupational voice is a work in progress: active risk management, habilitation and rehabilitation
Purpose of review The current article reviews recent literature examining occupational voice use and occupational voice disorders (January 2018–July 2019). Recent findings Our understanding of the prevalence of voice disorders and work-related vocal use, vocal load and vocal ergonomics (environmental and person influences) across different occupations is continuing to build. There is encouraging evidence for the value of intervention programs for occupational voice users, particularly of late with performers, teachers and telemarketers. Education and prevention programs are emerging for other ‘at risk’ occupations. Summary Occupational health and workforce legislation does not adequately acknowledge and guide educational, preventive and intervention approaches to occupational voice disorders. Voice disorders are prevalent in certain occupations and there is an urgent need for research to support occupational voice health and safety risk measurement, prevention and intervention. Large population-based studies are required with a focus on the health and economic burden of occupational voice disorders.
INTRODUCTION
Occupational voice disorder literature is expanding with a call for improved occupational health and safety standards and legislation to protect voice [1,2]. Many occupations have been identified as at-risk for the development of voice disorders as a consequence of their inherent work conditions [3,4 && ]. While such studies are valuable in identifying 'who' is at-risk and in the exploration of possible influences, there is a lack of recent epidemiological information for occupational voice disorders in the general nontreatment-seeking population and what we do have is potentially outdated [5,6,7 & ]. Despite the high prevalence of occupational voice disorders, the WHO neglects to itemize voice disorders as a potential occupationally related disease or condition [8]. It is also difficult to determine where voice disorders fit within the existing criteria for work-related disease (communicable and noncommunicable) and injuries (intentional and unintentional). This may in part be due to the multidimensional nature of voice disorders as well as the inherent difficulty in measurement and in establishing an operational definition of vocal injury.
Definitions of work-related voice disorders or vocal injuries may vary across geographical location according to relevant legislation, terminology and context. Yet, any speech pathologist understands the enormity of the occupational voice-user population whereby voice is a critical occupational tool and no voice equals no work today -singers, stage performers, sports coaches, sales assistants, teachers, lecturers, lawyers, telephone operators, call centre workers, receptionists, priests and health professionals. Speech pathologists witness first-hand the extreme financial repercussions of voice disorders as well as the impact on social and professional identity.
There is a longstanding assumption of a causal relationship between heavy voice use and the development of voice problems. More recent research, however, suggests that the relationship is more complex: there are people working in heavy voice-use occupations who do not experience vocal difficulties, and many other environmental and contextual factors (coined voice ergonomics) have been proposed to exert an effect [9&&]. There has been a shift toward the exploration of these occupation-specific environmental factors, as well as person factors such as vocal fitness, as determinants of vocal survival in the workplace, especially for those with sustained heavy load. One report presented to the Australian Government described the impressive outcomes of a large-scale 'voice care for teachers' program spanning over 5 years (n ≥ 1500 teachers). In purely economic terms for the employer, a saving of $500 000 (AUD) was estimated due to reduced voice-related sick leave.
This current opinion provides an overview of articles published in the last 18 months (January 2018-July 2019) on the topic of occupational voice. We cover 'at-risk' workforce groups, work-related influences on vocal health, risk measurement and intervention, as well as considerations for the future.
OCCUPATIONAL RISK
Risks of vocal harm in those using their voice directly in performance of work duties needs to be understood to provide preventive strategies and early interventions aimed at minimizing development of vocal pathology.
Although most articles describe voice disorder prevalence or list vocal symptoms, there is a recent focus on work-related communication and environmental profiles in specific occupations [ ]. These types of studies may lead to useful insights for preventive and rehabilitation programs for specific populations.
Professional voice users
Chitguppi et al. [52] propose a nomenclature for people who rely on their voice for their occupation and suggest such voice users should be split into speaking and nonspeaking voice professionals. This may prove useful for determining relative prevalence figures for work-related voice disorders among each group, as current information is confounded by differences in voice-use characteristics and work contexts between singers and nonsinger professionals. Certain studies have used this binary classification to report differences between professional voice users [52][53][54].

KEY POINTS

Occupational voice users exhibit increased risk of dysphonia and suffer economic and psychosocial consequences.

Increasing understanding of environmental and personal profiles of specific occupational groups is developing.

Risk measurement is critical to evaluating and monitoring voice disorders in the workplace.

Risk management approaches, including group therapy and community-based education programs, are gathering support across occupational groups.

Researchers need to consider longevity of voice use, with specific attention to pediatric professional voice users and their future, as well as maintenance of occupational voice use in the aging workforce.
As an alternative construct, professional voice users are distinguished from occupational voice users in a new textbook, Voice Ergonomics: Occupational and Professional Voice Care, an excellent resource for voice teams [9&&]. The authors define professional voice users as those who need a skillful voice, as distinct from occupational voice users 'who need a lot of voice and often must use a loud voice' (such as the teachers and sports coaches described in the previous section). They further separate this group from active voice users, who use their voice during a working day but without regularly raised intensity (e.g. telemarketers and health workers) [9&&]. This proposed classification system is novel and provides interesting criteria for delineating the different vocal loads, work characteristics and phonatory needs.
The professional vocalist or working vocal artist is perhaps historically one of the most recognized 'at risk' professional voice user for the development of phonotraumatic lesions. However, employmentrelated prevalence figures for singers and actors are confounded by huge heterogeneity across and among these voice users in environmental and person variables such as type of voice use, performance environment, music genre, repertoire, context, vocal expectations and voice training. Other difficulties are the reliance on treatment-seeking populations, the inclusion of amateur performers and that many studies do not specify whether performance is the primary occupation.
Despite the dearth of epidemiologic studies, further valuable insights have been provided over the past 18 months regarding vocal health, voice demands, laryngology findings and treatment options among specific performer groups such as elite award-winning performers [55 & ], Broadway singers [56 && ], opera singers [57], theater singers [58 & ], theater actors [59], and singers of specific cultural music styles such as Carnatic [60], Korean classical [61] and Fado singers [62]. Weekly et al. [63 && ] conducted a global survey of an impressive number of amateur and professional voice-users (n ¼ 1195) on their vocal health practices and included both speaking and nonspeaking voice users. They found a third of respondents did not access medical care due to insurance or financial constraints. This suggests treatment-seeking populations may be an under-representation of the number of working vocalists with voice disorders. [22], reduced respiratory [73] or cardiopulmonary function [74 & ] and shyness [75]. Table 1 displays ergonomic and person-factor influences on vocal health.
RISK MEASUREMENT AND VOICE ERGONOMICS
A previously unreported proactive Australian voice care program, conducted for performers in a large-scale production known as Santa's Kingdom 2004, showed that performance vocal load can be less important than other work-related factors (Phyland, unpublished). Performers (n = 210) involved in this interactive exhibition worked intensively for the 4 weeks prior to Christmas in loud performance/activity stations around a large exhibition space. All underwent vocal screening baselines, vocal health education and end-of-production voice assessments. Significant short-term deterioration in vocal function was found for 151 (72%) of the performers on self-report surveys and perceptual and acoustic evaluation, although there were no ongoing concerns after the production's conclusion. Of great interest was the finding that even those with no or little speaking or singing performance (e.g. polar bear characters who were mute and fully suited) still demonstrated significant acute vocal change. Vocal fatigue was attributed by many performers to an intensive work timetable and a highly social 'extra-curricular' culture, rather than to inherent occupational vocal demands.
Although the identification and measurement of 'at risk' behaviors and influences has advanced, measuring the direct positive and negative impact of these factors on the vocal health of workers is not straightforward. Proving causation of work-related voice disorders is perhaps easier for acute injuries (such as vocal hemorrhage) than for chronic voice disorders. Undertaking baseline vocal assessments and regular screening is important for tracking potential voice changes and as points for comparison [49,68,71,76-80]. It is important to understand normal fluctuations in vocal function across working hours and days, and what symptoms (including fatigue), durations and severities constitute critical threshold points for the development of voice disorders [41,46,69,80-82]. The economic, logistic and psychological ramifications of a vocal injury can be dire for both employee and employer, with cancelled shows, loss of audience support, and an inappropriate assumption of poor vocal technique leading to stigma and reduced future employment prospects for the injured performer. Fortunately, with increased understanding of the etiological factors in vocal injuries, focus is shifting to a commitment to provide prevention and risk management programs across many different voice-user groups [10,87&&,88,89&,90-92], and proactive occupational health practices are emerging. It is difficult to obtain direct evidence of the efficacy of industry-funded programs due to the sensitivity of data and methodological limitations in program designs, as most do not have research as their primary objective. However, voice habilitation and rehabilitation programs, particularly for teachers and performers, feature in the recent international literature, with a favouring of the term vocal health over hygiene to better represent the philosophical underpinnings; example programs are summarized in Fig. 1.
Research into risk management of work-related vocal 'injuries' is thwarted by privacy protection and the sensitivity of information relating to both employer and employee. Occupationally induced voice disorders are strongly represented in laryngology clinics and require comprehensive assessment (with stroboscopy as a standard of care) and expert understanding of the occupational context and its potential relationship to the development, maintenance and recovery of voice disorders. There is a need to further evaluate intervention outcomes, improve understanding of rehabilitation, and develop evidence-based criteria for determining performance fitness in relation to the ability to meet vocal requirements (e.g. voice quality, strength, stamina, ease and reliability) across all work contexts.
Aging workforce
With our understanding of workforce vocal challenges across occupations accumulating, there is a need also to consider other contributions to voice. Allen and Miles [96] provided a comprehensive summary of age-related changes to the voice and current evidence-based interventions as part of a Special Issue on Ageing in Speech, Language and Hearing. International trends toward an aging workforce imply there will be a need to address the combination of presbyphonia and occupational voice use more frequently in the future. Research into aging and continued occupational voice use is critical for future-proofing our workforce [96,97].
Early onset professional voice use
It is not only adult vocalists who use their voices professionally - child performers also work with their voices, especially within the entertainment industry (television, film and musical theater). Many of the shows introduced this century, such as Billy Elliot The Musical, Matilda The Musical and School of Rock The Musical, feature children as central to the plot and can even involve a greater number of children than adults in the cast (Fig. 2). The associated occupational voice demands can be heavy, and there is an urgent need for research investigating the impact of this load on the development of the child performers' vocal folds and vocal function [98,99]. Unpublished data from Phyland's lab on the outcomes of a voice care program demonstrated that child performers (n = 194) working in professional musical theatre productions experienced no negative change in vocal function. Children can be highly resilient in managing heavy vocal load over lengthy production seasons with appropriate and expert vocal care, but long-term impact needs to be monitored, and speech pathologists and laryngologists need to continue advocating for optimal conditions for these children, who are still undergoing laryngeal anatomical development.
CONCLUSION
Voice disorders are prevalent in specific occupational groups and there is an urgent need for research to support occupational voice health and risk measurement, prevention and intervention. Our understanding of vocal use, vocal load and vocal ergonomics (environmental and person influences) across different occupational groups is building. There is encouraging evidence supporting intervention programs for occupational voice users, with a primary focus on teachers and increasingly including performers. Education and prevention programs are emerging. Large population-based studies are required with a focus on the health and economic burden of occupational voice disorders. International occupational health and workforce legislation does not currently adequately acknowledge, prioritise or guide educational and preventive interventions. There is an urgent need to formally identify combined risk factor bundles or environments; quantify the potential threat that voice disorders pose to a safe and healthy workplace; reduce the expression of voice disorders and their concomitant occupational burden; and develop prevention, management and health promotion targeted toward optimal occupational vocal function.
Financial support and sponsorship
None.
Conflicts of interest
There are no conflicts of interest.
Targeting Ribonucleases with Small Molecules and Bifunctional Molecules
Ribonucleases (RNases) cleave and process RNAs, thereby regulating the biogenesis, metabolism, and degradation of coding and noncoding RNAs. Thus, small molecules targeting RNases have the potential to perturb RNA biology, and RNases have been studied as therapeutic targets of antibiotics, antivirals, and agents for autoimmune diseases and cancers. Additionally, the recent advances in chemically induced proximity approaches have led to the discovery of bifunctional molecules that target RNases to achieve RNA degradation or inhibit RNA processing. Here, we summarize the efforts that have been made to discover small-molecule inhibitors and activators targeting bacterial, viral, and human RNases. We also highlight the emerging examples of RNase-targeting bifunctional molecules and discuss the trends in developing such molecules for both biological and therapeutic applications.
■ INTRODUCTION
Ribonucleases (RNases) are RNA-cleaving proteins that regulate the metabolism of RNAs. The two main classes of RNases are endoribonucleases and exoribonucleases.1−3 Distinction of these processes relies on differences in the RNA-substrate sequence specificity of RNases and their preference for single- or double-stranded RNAs. RNases either function in a processive manner or cleave their substrate only once after binding.4 The nucleases are master regulators of RNA-dependent pathways and play indispensable roles in RNA biogenesis.5,6 Therefore, small molecules modulating RNase activities are useful tools for studying the regulatory mechanisms involving both human and pathogenic RNases and are potential candidates for the development of therapeutics. Representative examples of such small molecules include inhibitors of the viral RNases influenza polymerase acidic (PA) endonuclease and human immunodeficiency virus (HIV) RNase H, as well as inhibitors of several human RNases. The potential of RNases as therapeutic targets was discussed in a previous opinion article;1 however, a review summarizing the scattered examples of these RNase-targeting small molecules is currently missing. Notably, in addition to small molecules, RNases have been targeted by the emerging class of proximity-inducing bifunctional molecules for various chemical biology applications. Therefore, in this Review, we highlight two aspects of the current landscape in RNase targeting: small molecules for the development of therapeutics, and bifunctional molecules that modulate RNA biogenesis and metabolism (Figure 1). The former aspect is divided into three target families: bacterial RNases, viral RNases, and human RNases. We also discuss trends and future directions in exploring RNases as drug targets for small and bifunctional molecules.
■ SMALL-MOLECULE INHIBITORS OF BACTERIAL RNASES
RNase P. Bacterial ribonuclease P (RNase P) is a unique RNase in that it is a ribonucleoprotein functioning as a ribozyme. RNase P catalyzes the maturation of transfer RNA (tRNA) by cleavage of the 5′ end of precursor tRNA. Inhibitors of RNase P enzyme function, which is crucial for bacterial survival, could potentially serve as antibiotics. The only FDA-approved inhibitors of RNase P to date are antibiotic aminoglycosides such as neomycin. Notably, RNase P inhibition is not the sole mechanism of action for aminoglycoside antibiotics, since their interaction with rRNA is known to cause errors in translation.7 The development of fluorescence real-time tRNA cleavage assays enabled higher-throughput screening for inhibitors.8,9 A fluorescence polarization assay using a fluorophore at the 5′ end of the precursor tRNA confirmed neomycin B as an RNase P inhibitor (Figure 2). A screening of 2880 compounds discovered iriginol hexaacetate, which inhibited RNase P with an IC50 of 0.8 μM (Figure 2).8 A Förster resonance energy transfer (FRET) assay-based screening retrieved purpurin as an RNase P inhibitor (Figure 2). Purpurin bound to RNase P with a KD of 13 μM, and its binding to the protein component of the enzyme was confirmed by crystallization of a purpurin−RNase P complex.9 Purpurin carries a three-oxygen pharmacophore that is often observed to coordinate the divalent metal ions of RNases.9,10 However, data on the antibacterial effect of purpurin were not reported.9 Additionally, RNPA2000 was reported to show weak RNase P inhibitory activity (IC50: 125 μM, Figure 2).11 A concern for the identified RNase P inhibitors, including RNPA2000, iriginol hexaacetate, and purpurin, is that they act as aggregators and are likely unspecific RNase P inhibitors.12,13 In addition to small molecules, a FRET assay was employed to evaluate rationally designed and modified oligonucleotides as inhibitors of RNase P.
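Binding affinities such as the KD of 13 μM reported for purpurin can be put in concrete terms via the simple 1:1 binding isotherm, fraction bound = [L]/(KD + [L]). The sketch below (helper name hypothetical; it assumes ligand in large excess over enzyme, so free ligand ≈ total ligand) illustrates why such a weak binder needs tens of micromolar concentrations for appreciable target occupancy.

```python
def fraction_bound(ligand_conc_m, kd_m):
    """Equilibrium occupancy of the target for simple 1:1 binding,
    assuming free ligand ~ total ligand (ligand in large excess)."""
    return ligand_conc_m / (kd_m + ligand_conc_m)

KD = 13e-6  # purpurin-RNase P dissociation constant, 13 uM (as reported)
for conc in (1e-6, 13e-6, 100e-6):
    print(f"{conc * 1e6:6.1f} uM ligand -> {fraction_bound(conc, KD):.0%} target bound")
```

At the KD itself (13 μM), occupancy is by definition 50%; even at 100 μM the enzyme is only ~88% occupied, consistent with the weak, hard-to-interpret inhibition observed for such low-affinity binders.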
Antisense oligonucleotides targeting the RNA component of RNase P were coupled to cell-penetrating peptides to yield conjugates that inhibited bacterial growth. The best-performing conjugates showed IC50 values of ∼100 nM, making them the most potent RNase P inhibitors reported to date.14
RNase E. RNase E is another bacterial RNase, an endonuclease involved in ribosomal maturation and RNA turnover. RNase E is a central component in the formation of the degradosome complex in E. coli, and small-molecule inhibitors were identified with the aim of studying its cellular functions. A virtual screening predicted small-molecule binders of RNase E, which were evaluated by biochemical assays and SPR, albeit with weak millimolar activities.15 In summary, although inhibitors against bacterial RNases such as RNase P and RNase E are potential antibiotics, no potent and selective small-molecule inhibitors have been described so far. Even for the inhibitors listed here, their effects on bacterial growth were mostly not reported or investigated.
■ VIRAL RNASES
RNase H. RNase H is part of the viral reverse transcriptase that removes RNA template strands to allow the synthesis of double-stranded DNA from viral genomic RNA, leading to the incorporation of the double-stranded DNA into the host genome by integrases.19,20 To date, HIV is treated by a combination of drugs in highly active antiretroviral therapy (HAART), and all HIV enzymatic activities except RNase H can be targeted effectively.21,22 Despite numerous studies aiming to identify RNase H inhibitors, current FDA-approved HIV-1 reverse transcriptase inhibitors all target the polymerase activity, but not the RNase H activity, of the enzyme.20,23 RNase H has been extensively studied as a drug target since the 1990s, as reviewed recently by Tramontano et al.16,24−26 Screenings for RNase H inhibitors were historically performed by denaturing gel-based assays with isotope-labeled RNA substrates in limited throughput.24,25,27 More recent studies used fluorescent assays to identify active compounds, e.g., a FRET assay using an RNA−DNA hybrid in which the DNA was labeled at both ends with a fluorophore−quencher pair. Hydrolysis of the RNA led to DNA-hairpin formation and induced fluorescence quenching.28 Another assay used a fluorophore-labeled DNA annealed with a quencher-labeled RNA, leading to a fluorescent signal upon RNA cleavage.29 In general, three classes of RNase H inhibitors have been described: metal-chelating active-site binders, allosteric inhibitors, and dual inhibitors against both the reverse transcription and RNase activities.16 Active-site binding molecules mostly carry a three-oxygen pharmacophore that allows chelation of the magnesium ions of the DEDD (Asp-Glu-Asp-Asp) family endonuclease RNase H. Small molecules with diverse scaffolds have been identified as active-site inhibitors and evaluated in structure−activity relationship studies to optimize affinities (Figure 3).
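The IC50 values quoted throughout this Review come from fitting such dose-response readouts to a sigmoidal inhibition model. As a minimal, self-contained sketch (all names and data are hypothetical; real analyses use nonlinear least-squares packages rather than a grid search), fractional activity for simple one-site inhibition follows activity = 1/(1 + [I]/IC50), and the IC50 can be recovered from assay readings:

```python
import math

def hill(conc, ic50, hill_n=1.0):
    """Fractional enzyme activity remaining at inhibitor concentration
    `conc` for a simple one-site model with no residual activity."""
    return 1.0 / (1.0 + (conc / ic50) ** hill_n)

def fit_ic50(concs, activities, lo=1e-10, hi=1e-3, steps=200):
    """Least-squares grid search for the IC50 (in M) on a log scale."""
    best_ic50, best_err = None, float("inf")
    for i in range(steps + 1):
        ic50 = 10 ** (math.log10(lo) + i * (math.log10(hi) - math.log10(lo)) / steps)
        err = sum((hill(c, ic50) - a) ** 2 for c, a in zip(concs, activities))
        if err < best_err:
            best_ic50, best_err = ic50, err
    return best_ic50

# Simulated dose-response for a hypothetical inhibitor with a true IC50 of
# 180 nM, mirroring the 0.18 uM reported for pyrimidinol carboxylic acid 11.
true_ic50 = 1.8e-7
concs = [10 ** e for e in range(-9, -3)]          # 1 nM .. 100 uM
activities = [hill(c, true_ic50) for c in concs]  # noiseless fractional activity
est = fit_ic50(concs, activities)
print(f"estimated IC50 = {est * 1e6:.2f} uM")     # prints "estimated IC50 = 0.18 uM"
```

Cell-based EC50 values are fitted the same way with a replication readout in place of enzymatic activity, which is one reason a compound can show a nanomolar biochemical IC50 yet only a micromolar EC50.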
Most molecules were reported to show low micromolar IC50 values in biochemical assays, with some showing nanomolar potency.16 Several molecules, such as the pyrimidinol carboxylic acid 11, which showed an IC50 of 0.18 μM against the HIV RNase H domain, did not inhibit viral replication in HIV infectivity assays.30 Complex structures of the HIV RNase H domain and small molecules have been solved and clearly showed the binding mode involving the three-oxygen pharmacophore interacting with the metal ions.16,31−33 The RNase H domain active site is a shallow pocket that is difficult to drug without metal chelation.16 Active-site analysis revealed homology to the catalytic center of HIV integrase, which contains two divalent metal ions and is responsible for inserting viral DNA into host genomic DNA.34 Modification of integrase inhibitors led to the identification of dual inhibitors of RNase H and HIV integrase, such as one of the first described RNase H inhibitors, 4-[5-(benzoylamino)thien-2-yl]-2,4-dioxobutanoic acid (BTDBA).35,36 Dual inhibitors against two enzymatic functions of the same virus boosted the activity of the molecules against HIV replication and thus were further investigated.36 One of the most potent HIV RNase H domain inhibitors reported to date is the N-hydroxypyrimidinedione 45, with an IC50 of 25 nM against RNase H and an IC50 of 21 nM against HIV integrase. Compound 45 showed an EC50 of 15 nM in an antiviral replication assay.37 Another promising RNase H inhibitor is the N-hydroxypyrimidinedione 13j, with an IC50 of 5 nM against RNase H, ∼1000-fold lower activity against HIV integrase (IC50 4 μM), and an EC50 of 7.7 μM in the antiviral replication assay. The large difference between biochemical and cellular activities could be explained by high substrate abundance and the fact that small molecules can only bind to the RNase H domain, while substrate binding is highly dependent on its interaction with the polymerase domain. Thus, it is a challenge for small molecules
to outcompete the substrates to achieve potent inhibition.38,39 Active-site inhibitors of HIV RNase H were tested for their inhibitory activity against HBV RNase H, given that the HIV and HBV RNase H domains share 23% amino acid sequence homology. Among compounds that showed moderate activity, the N-hydroxypyridinediones were investigated in a further SAR study.28,40 Other reported HBV RNase H inhibitors either showed weak inhibitory potency or suffered from cytotoxicity.40−42 While active-site RNase H inhibitors all carry a chelating-triad pharmacophore, inhibitors without this motif were characterized by crystallography and NMR as allosteric inhibitors. Among them was the acylhydrazone BHMP03 (IC50 0.4 μM), which binds to the substrate-handle region of RNase H.16 Vinylogous ureas such as NSC727447 were identified as allosteric RNase H inhibitors by screening 230,000 natural compounds and led to the development of 3′,4′-dihydroxyphenyl-containing thienopyrimidinones (compound 9 and analogues) with submicromolar activity in biochemical assays and low micromolar activities in antiviral replication assays.16,43 Further SAR studies led to the identification of the benzothienooxazinone compound 22 as a dual inhibitor of the RNase H and reverse transcriptase activities of the HIV enzyme, with IC50 values of 0.53 and 2.90 μM, respectively.44 Other dual inhibitors were based on the structure of the dihydroxybenzoyl naphthyl hydrazone DHBNH but did not show improved activity in comparison with compound 22.45,46
Nsp14 and Nsp15. Recently, the first inhibitors targeting the SARS-CoV-2 ribonucleases nsp14 and nsp15 were reported. Nsp15 is an endoribonuclease cleaving RNA 3′ of uridines to prevent host recognition of the virus. Virtual screening of approved drugs and drugs under investigation for approval identified dutasteride and tasosartan as inhibitors of nsp15, but biochemical assays revealed only 40% inhibition at 600 μM concentrations.9 The uracil derivative tipiracil is a low-micromolar (IC50 7.5 μM) nsp15 inhibitor with antiviral activity.50 Nsp14 is a proofreading exonuclease enhancing replication fidelity, rendering nucleoside analogues ineffective as drugs against SARS-CoV-2. Nsp14 inhibition was proposed as a strategy to enable nucleoside analogues to be active.18 First reports of nsp14 drug discovery efforts identified micromolar inhibitors from a FRET assay using fluorophore-labeled annealed RNA strands as a probe. Nsp14 was identified to belong to the DEDD superfamily of ribonucleases harboring divalent metal ions.51 Thus, it is unsurprising that one of the identified inhibitors, compound 79, harbored the chelating-triad pharmacophore described for RNase H inhibitors (4).52 A SAMDI assay screening of 10,240 small molecules, detecting RNA cleavage by mass spectrometry, led to the identification of an nsp14 inhibitor with an IC50 of 5.7 μM.53 Fragments with a potential allosteric binding mode were discovered by fragment screening and could serve as a starting point for future nsp14 inhibitors.54
PA Endonuclease. Another well-studied viral nuclease target is the influenza virus PA protein, possessing endonuclease activity that was reported to be active against both RNA and DNA substrates. It cleaves host-cell mRNA caps that are then used as primers for the transcription of viral mRNA.55,56 The enzymatic function is crucial for viral replication, which makes the influenza PA endonuclease a prime target for antiviral therapeutics. Like RNase H, PA endonuclease has two crucial
divalent metal ions (Mn2+ or Mg2+) in its active site.56 Therefore, structures of reported inhibitors of influenza PA endonuclease are similar to inhibitors of viral RNase H. Nearly all inhibitors harbor a three-oxygen metal-chelating pharmacophore combined with hydrophobic moieties (Figure 4A).57 The metal-chelating pharmacophore for inhibition of the cap-snatching endonuclease was identified via screening already in the 1990s. One identified inhibitor was L-735,882, which inhibited influenza endonuclease with an IC50 of 1.1 μM.58 SAR exploration improved the activities (IC50 < 1 μM).57 Recently, a fragment-based approach starting with metal-chelating compound 1 led to the discovery of compound 23, which displayed an IC50 of 47 pM. However, the measured EC50 of compound 23 in the antiviral replication assay was in the low micromolar range.59 The most successful PA endonuclease inhibitor is baloxavir marboxil, which the US FDA approved in 2018 for the treatment of influenza virus. Baloxavir was developed by rational design from the approved HIV integrase inhibitor dolutegravir (Figure 4), as the HIV integrase active site shares structural similarity with influenza PA.60,61 Baloxavir marboxil is a prodrug derived from the potent inhibitor baloxavir acid by shielding the hydroxyl group to increase bioavailability.60,62,63 Baloxavir acid had single-digit nanomolar or subnanomolar potency against 22 different influenza A and B strains and showed an improved effect in clinical trials compared with treatment with the neuraminidase inhibitor oseltamivir.64 Taken together, viral nucleases have been studied as antiviral drug targets for more than three decades. The influenza endonuclease inhibitor baloxavir is a representative example that has been approved for antiviral clinical usage.
Besides influenza endonuclease, small molecules have been reported for HIV RNase H, but still with limited potency in inhibiting virus replication, so RNase H remains the only HIV enzymatic function that has not been successfully addressed clinically. The Covid-19 pandemic promoted the study of the RNases of SARS-CoV-2, and emerging inhibitors targeting nsp14 and nsp15 are being reported.
■ SMALL MOLECULES TARGETING HUMAN RIBONUCLEASES
Human ribonucleases are essential effectors in various cellular pathways responsible for immune responses, apoptosis, and inflammation. Thus, the modulation of human RNases with small molecules has been studied as a promising strategy for developing therapeutics and various biological applications. For example, the RNase A family ribonucleases RNase 1 and angiogenin are secretory enzymes involved in host defense, tightly regulated by the ribonuclease inhibitor protein.65,66 The first reported RNase A family inhibitors were derived from nucleotide structures and showed nanomolar activities, followed by a few small-molecule inhibitors with moderate activities.67−69
Dicer. The RNases Drosha and Dicer are central regulators of the maturation of noncoding RNAs. The enzymes specifically cleave primary and precursor miRNA transcripts to generate mature miRNAs. Both Drosha and Dicer are conserved processors of canonical miRNAs. No small-molecule inhibitor of Drosha has been reported to date. Small-molecule modulators of Dicer activity allow studying Dicer biology or serve as starting points for the development of anticancer agents. To identify Dicer inhibitors, a fluorescence quenching assay using a double-stranded, fluorophore- and quencher-labeled RNA substrate was developed to monitor Dicer activity, which identified the aminoglycoside kanamycin that inhibited Dicer processing by 40% at a concentration of 100 μM.70
Caf1 and PARN.
Ribonucleases involved in mRNA post-transcriptional regulation and turnover include mRNA deadenylases such as Caf1/CNOT7, which is part of the deadenylating Ccr4-Not complex, and poly(A)-specific ribonuclease (PARN). Both enzymes have two divalent metal ions in the catalytic site coordinated by a DEDD motif. Caf1 inhibitors were discovered to develop probes to assist the biological understanding of the RNase. A FRET assay employing a fluorophore-labeled RNA probe was applied; the RNA probe hybridized with a fluorescently labeled DNA when it was not cleaved by Caf1.71 The most potent Caf1 inhibitors, with low micromolar to submicromolar IC50 values, were inspired by HIV RNase H inhibitors and harbor the metal-chelating three-oxygen pharmacophore, such as compound 8j, which also showed activity against PARN (Figure 5).72 Other published PARN inhibitors are mainly nucleoside analogues and aminoglycosides.73−76
RNase H2. RNase H2 is a DEDD-superfamily metalloenzyme that cleaves the RNA strand of DNA−RNA duplexes.77 RNase H dysregulation in humans is associated with the genetic autoimmune disease Aicardi-Goutières syndrome (AGS), which causes neurological dysfunction early during infancy.78 A screening of 47,520 compounds employing a fluorescent assay using fluorophore- and quencher-labeled RNA−DNA duplex substrates led to the identification of the isothiazolidinone R11/ebsulfur, with an IC50 of 0.02 μM against RNase H2 (Figure 5).79
RNase L. The latent ribonuclease (RNase L) is another human RNase implicated in AGS. RNase L is endogenously expressed in human cells and is central to innate immune and antiviral responses.3 Oligoadenylate synthase recognizes and binds double-stranded RNA upon viral infection and produces 2′-5′-linked oligoadenylates (2′-5′A).80 2′-5′A binds to RNase L to induce dimerization and form the catalytically active, dimeric form of RNase L.81 Activated RNase L cleaves host and viral single-stranded RNA, leading to a global translational arrest.82,83 RNase L consists of an N-terminal ankyrin repeat domain that is mainly responsible for 2′-5′A binding, a catalytically inactive kinase domain that binds ATP, and a C-terminal ribonuclease domain that cleaves the substrate RNA upon dimerization.81 Both small-molecule inhibitors and activators of RNase L have been reported (Figure 6). Inhibitors could serve as candidates for the treatment of AGS, while activators have been developed as antiviral compounds.84,85 The kinase domain of RNase L shares high sequence homology with that of dsRNA-dependent protein kinase R (PKR) and inositol-requiring enzyme 1α (IRE1); therefore, reported RNase L inhibitors share the structural features of small-molecule kinase inhibitors that target the kinase domain, such as the FDA-approved receptor tyrosine kinase inhibitor sunitinib.86 Sunitinib was tested with varied IC50 values against RNase L, ranging from 1.4 to 33 μM, and showed an enhanced effect of oncolytic viruses against tumors in mouse models.87−89 The complex structure of sunitinib and RNase L confirmed binding at the kinase domain and suggested dimer destabilization as the inhibition mechanism. Exchange of the fluorine substituent for chlorine improved the inhibitory activity ∼4-fold.89 Screenings of 500 kinase inhibitors and 840 fragments using a FRET-based assay employing a single-stranded RNA probe labeled with a fluorophore−quencher pair resulted in the identification of ellagic acid (IC50 73.5 nM) and hyperoside (IC50 1.63 μM) as RNase L inhibitors.84,90 The discovery of RNase L activators was based on FRET assays similar to those used to evaluate inhibitors, albeit without the addition of the natural activator 2′-5′A.91−94 First, the thiophenone C1 and the thienopyrimidinone C2 were retrieved as RNase L activators out of a library of 30,000 small molecules, with EC50 values of 26 and 22 μM, respectively, with a proposed activating
mechanism by binding to the 2′-5′A-binding site.91 A more recent study optimized the thiophenone scaffold of C1 and yielded compound C1−3, which showed 48% activation at 130 μM compared with 2′-5′A at 100 nM.92 Combination of the 2-aminothiophenone scaffold of C1−3 with the pyrrole scaffold of sunitinib led to RNase L inhibitors with a hybridized scaffold.95 An extensive SAR study focusing on the aminothiophene core scaffold and a screening of 240,000 small molecules resulted in small-molecule binders that stabilized RNase L but without significant improvement in RNase L-activating potency.93,94 Very recently, compound 2 was identified by DNA-encoded-library screening as an RNase L activator that induced RNase L dimerization with micromolar potency.96
IRE1. The serine/threonine-protein kinase/endoribonuclease IRE1 regulates the unfolded protein response and is located on the membrane of the endoplasmic reticulum (ER).84 IRE1 consists of an N-terminal ER-lumenal domain involved in unfolded-protein detection, a transmembrane region, a kinase domain, and an RNase domain.97 The kinase and RNase domains are structurally related to those of RNase L, and the RNase domain fold is distinct from other proteins (Figure 6).98 Unfolded proteins induce dimerization of IRE1 and downstream signaling, including autophosphorylation and RNase activation.97,99 The activated RNase cleaves ER-bound RNA for its decay and specifically cuts X-box binding protein-1 (XBP1) mRNA in a nonconventional splicing mechanism, leading to the expression of a potent transcription factor.86,100 One reported downstream target of spliced XBP1 is the oncogene MYC. Thus, IRE1 activity is associated with cancer progression, aggressiveness, and poor prognosis.101−104 In addition to being studied as an anticancer target, IRE1 has also been implicated in diabetes and the regulation of angiogenesis.105,106 IRE1-targeting modulators have been developed against both the kinase and RNase domains (Figure 6).86 Similar to RNase L, reported kinase inhibitors were repurposed for IRE1 inhibition.97,107 Interestingly, some kinase inhibitors did not inhibit RNase function but activated IRE1 RNase activity instead.110,111 Assays for the identification of IRE1 modulators comprise splicing assays, fluorescence-based cleavage assays using fluorophore- and quencher-labeled XBP1 RNA substrates, and phosphorylation assays that focus on kinase activity instead of RNase activity.107,109,112,113 The IRE1 inhibitor imidazo[1,5-α]pyrazine-8-amine compound 3 was identified by screening known type II kinase inhibitors, and further modification led to the development of compounds KIRA6 and KIRA7.107,109,114 Both KIRA6 and KIRA7 were potent inhibitors in biochemical and cellular evaluations, but a photoaffinity labeling approach revealed low selectivity of the imidazo[1,5-α]pyrazine-8-amine scaffold toward IRE1. Compound 31 was further identified as an IRE1-selective inhibitor (IC50 80 nM) that allosterically stabilized the inactive kinase conformation.118 The unfolded protein response inhibitor UPRM8 is a covalent inhibitor targeting the kinase domain of IRE1 by reacting with a conserved cysteine residue in the active site.119 In contrast to type II kinase inhibitors, type I kinase inhibitors activated the RNase domain of IRE1. Examples of such activators with poor selectivity include
the receptor tyrosine kinase inhibitor sunitinib and the promiscuous kinase inhibitor APY29.97 In contrast, the pyrazolopyridine compound G-1749 activated unphosphorylated IRE1 via modulation of the activation loop, with an EC50 below 0.1 μM, and showed a favorable selectivity profile.108 The RNase domain of IRE1 was directly targeted by covalent modifiers of lysine 907, such as hydroxy-arylaldehydes represented by compound MKC9989.120 The lysine residue is not present at the same position in RNase L, making such a lysine-based covalent-targeting approach selective for IRE1. Furthermore, a docking study addressing the dimer interface of the RNase domain identified neomycin (IC50 0.33 μM) as an IRE1 inhibitor.112
■ BIFUNCTIONAL MOLECULES TARGETING RNASES
Bifunctional Dicer Inhibitors. An emerging approach to target ribonucleases is the design of bifunctional molecules that activate, bind, or inhibit an RNase in a specific context (Figure 7). The metal-dependent RNase Dicer is involved in the maturation of noncoding miRNAs. Bifunctional Dicer inhibitors allow specific inhibition of the processing of a selected target RNA, avoiding the effect on all canonically generated miRNAs that full Dicer inhibition would have. The bifunctional inhibitors consist of an RNA-binding molecule and a weak Dicer inhibitor.121−124 The reported Dicer inhibitors were derived from pharmacophores of other endoribonuclease III inhibitors initially developed against influenza endonuclease and harbor the typical metal-chelating three-oxygen pharmacophore.125 The aminoglycosides neomycin and kanamycin were employed as the RNA-binding molecules to target the oncogenic miRNA miR-21.122,123 The specificity of such bifunctional Dicer inhibitors was increased using antisense oligonucleotides, which alone did not inhibit the processing of pre-miR-21 by Dicer.121 A following study showed the possibility of incorporating a light-cleavable linker into the bifunctional molecule to deactivate the inhibitory activity with light.124
PROTAC.
In general, the primary type of proximity-inducing bifunctional molecule is the proteolysis targeting chimera (PROTAC), which has gained substantial attention in the past decade as a means to achieve degradation of protein targets of interest via the ubiquitination and proteasome-mediated degradation pathway.126 Until now, no PROTAC that degrades an RNase has been developed, although a screening hit against influenza PA endonuclease, APL-16-5, was confirmed to have a PROTAC-like mechanism. APL-16-5 is a microbial metabolite of Aspergillus sp. CPCC 400735 with an EC50 of 0.28 μM in antiviral assays that was shown to prevent the lethality of influenza infections in mice. Evaluation of its mode of action revealed a gluing mechanism of binding to the PA endonuclease and the E3 ligase TRIM25 to induce ubiquitination and thus endonuclease degradation.127
RIBOTACs. In comparison to the protein-degrading PROTACs, proximity-inducing bifunctional molecules have been reported by the Disney lab to induce targeted degradation of RNAs via the recruitment of RNase L, i.e., ribonuclease targeting chimeras (RIBOTACs). So far, RNAs with different structured elements have been successfully degraded via RIBOTACs that recruit RNase L.92,96,128−134 The pioneering type of RIBOTAC used 2′-5′-linked oligoadenylates, the natural RNase L activator, as the RNase L-recruiting component coupled to a dimeric miR-96 binder. Small-molecule-based RIBOTACs using the aminothiophenone compound C1-3 as the RNase L recruiter were then reported to induce miR-21 degradation and led to reduced metastasis in mouse breast cancer models.92 The same RNase L recruiter was used in the design of RIBOTACs degrading the oncogenic miR-17, miR-18a, and miR-20a cluster; degrading the SARS-CoV-2 attenuator hairpin involved in frameshifting of the ribosome (C5-RIBOTAC); for isoform-specific targeting of the mRNA of quiescin sulfhydryl oxidase 1 isoform a; and degrading an expanded G4C2 RNA repeat associated with amyotrophic lateral sclerosis and frontotemporal dementia.129−131,133 Small-molecule RNA binders that do not convey biologically active interactions were conjugated to the same aminothiophenone RNase L recruiter to form RIBOTAC degraders targeting oncogenic pre-miR-155, JUN mRNA, and MYC mRNA.134 Recently, a biphenyl RNase L activator identified via screening of a DNA-encoded library was used in building the dovitinib-RIBOTAC 7, which degraded miR-21 and deactivated a miR-21-mediated cancer circuit in MDA-MB-231 cells.96 Apart from RNase L recruiters, bifunctional molecules with a chemical degrader of RNA, including imidazole or a bleomycin derivative, have been developed.135−137 Both the small-molecule RNase L recruiters and the Dicer inhibitors that were used as building blocks for the bifunctional molecules are weak activators or inhibitors of their target RNases. The bifunctional-molecule approach allows a local increase of RNase concentration, or a local modulation of the RNase, affecting only the target RNA of interest.
■ SUMMARY AND PERSPECTIVES
RNases are ubiquitous RNA-cleaving and -modifying proteins that regulate RNA biology and metabolism. RNases play essential roles in various cellular functions, including immune responses, antiviral pathways, apoptosis, and inflammatory reactions. Therefore, RNases have been targets for the development of small-molecule modulators for both biological and therapeutic applications. To date, small-molecule inhibitors of bacterial, viral, and human RNases have been reported, together with limited examples of small-molecule activators targeting a few selected human RNases. Successful examples, such as the approved agent baloxavir marboxil, which inhibits the influenza PA endonuclease, indicate the potential of developing RNase-targeting small molecules for therapeutic purposes. Most addressed ribonucleases are metalloenzymes, and thus the three-oxygen pharmacophore discussed for the above-mentioned metal-chelating inhibitors is highly abundant. The prodrug approach of baloxavir marboxil allows for higher cellular availability and thus provides a feasible route to molecules with improved efficacy. Prodrug designs that shield the three-oxygen pharmacophore of many RNase-targeting small molecules can potentially improve the limited cellular activity of many reported inhibitors and will be useful for discovering further optimized molecules. It is noteworthy that many reported RNase-targeting small molecules feature structures that may interfere with assay readouts; 13 the same applies to small-molecule inhibitors of RNA-binding proteins in general.
138 Therefore, careful validation via orthogonal assays and evaluation of the targeting profile and mechanisms of inhibition/activation are needed to identify robust molecules to be studied as either drug candidates or useful probes. In addition to synthetic small molecules, aminoglycosides such as neomycin have repeatedly been described as inhibitors of a broad range of different RNases, with generally low activity and selectivity, and aminoglycosides are also known to bind RNAs. RNase L is one of the few RNase targets for which both inhibitors and activators have been reported, targeting different structural domains. Covalent inhibitors binding to the ribonuclease or adjacent domains have been reported for IRE1.−141 Of note, cytotoxic RNases such as Ranpirnase and Barnase have also been studied for direct use as anticancer agents in gene therapy. 142,143 A new perspective in the field is the emerging class of bifunctional molecules that target the RNases Dicer and RNase L for specific inhibition or activation, achieving either inhibition of RNA biogenesis or induced RNA degradation. Given the biological relevance of RNases and their close association with pathological states, it is reasonable to expect that more RNases will serve both as protein targets of interest for bifunctional molecules and as effector-protein components to be utilized by them. These RNase-targeting molecules will contribute to an improved understanding of RNase biology at both the transcriptional and translational levels and provide new perspectives for developing small-molecule-based therapeutics.
KEYWORDS
Bifunctional molecule, a molecule with two functional entities. Synthetic heterobifunctional molecules often incorporate a linker structure between the two functional entities (e.g., two small molecules) interacting with two different biomacromolecules or domains; DEDD motif, a common motif in nucleases consisting of the four acidic amino acids Asp, Glu, Asp, and Asp, coordinating two divalent metal ions that coordinate the substrate phosphate in the active site of the nuclease; noncoding RNAs, RNA molecules that are not translated into proteins but usually play important regulatory roles in various biological processes and diverse cellular activities; PROTAC, proteolysis targeting chimeras are bifunctional molecules coupling a protein-binding moiety to an E3-ligase binder, targeting a protein of interest for degradation; proximity-inducing molecule, a molecule that induces post-translational modifications or temporal control of biological processes by bringing two biomacromolecules (that would not normally interact) into close proximity; ribonuclease, an enzyme that cleaves the phosphodiester bonds of ribonucleic acids; ribonuclease L (RNase L), an antiviral interferon-induced ribonuclease that is activated by 2′-5′ oligoadenylate binding, resulting in cleavage and degradation of cellular RNAs; RIBOTAC, ribonuclease targeting chimeras are bifunctional molecules consisting of a functional unit that binds RNase L linked to an RNA-binding moiety. The RNase is recruited to the RNA of interest, inducing specific degradation of that RNA
Figure 1 .
Figure 1. Current RNase-targeting strategies using small molecules and bifunctional molecules. (A) Small-molecule modulators include inhibitors that bind to the RNase active site, inhibitors and activators that bind to allosteric sites, or binders to adjacent domains. (B) Described bifunctional modulators enable targeted RNA degradation by the recruitment of RNase L or inhibition of Dicer-mediated RNA cleavage by coupling a weak Dicer inhibitor to an RNA binder.
Figure 2 .
Figure 2. Small-molecule inhibitors of bacterial RNase P (the red color indicates the metal-chelating pharmacophore of purpurin).
Figure 3 .
Figure 3. Structures of inhibitors of HIV RNase (the red color indicates metal-chelating pharmacophore).
Figure 4 .
Figure 4. (A) Structures of small-molecule inhibitors of nsp14 and nsp15 of SARS-CoV-2 and the PA endonuclease. The red-colored structure indicates the metal-chelating pharmacophore. (B) Crystal structure of baloxavir acid bound to the PA endonuclease of influenza A virus, showing the interaction of the compound with the metal ions (top) and a surface representation (bottom) (PDB 6FS6).
Figure 5 .
Figure 5. Structures of inhibitors of human RNases Caf1 and RNase H2 (the red color indicates the metal-chelating pharmacophore).
Figure 6 .
Figure 6. Small molecules targeting human RNases. RNase L inhibitors (gray background), RNase L activators (green background), and IRE1 modulators (blue background). The full-length structure of RNase L is shown (PDB 4O1P) together with the kinase and RNase domains of IRE1 (PDB 6W3C).
Figure 7 .
Figure 7. Bifunctional molecules targeting RNases. For clarity, general structures and representative examples of Dicer-targeting bifunctional molecules and RNA-targeting RIBOTACs are shown instead of all reported examples with full structures. ASO, antisense oligonucleotide.
Results of 10 years sampling of Chironomidae from German lowland running waters differing in degradation
Based on data from a decade of monitoring in north-east Germany (Land Brandenburg), chironomid taxa and their abundances were analysed for their preferences for certain running water types and degradation levels. A detrended correspondence analysis (DCA) revealed that the distribution of the taxa was determined by both degradation level and water body type. A series of taxa was positively or negatively correlated with degradation, although frequently only in certain water body types. In other types, they played no major role or were evenly distributed along the degradation gradient. Higher correlations were found for taxa favouring 'good' and 'poor' or 'bad' conditions than for those preferring 'moderate' status. The results allow validation of indices and of the weights of indicators derived from high frequencies of the taxa in one or more quality classes. The preferred occurrences elaborated here have to be discussed or tested for plausibility before powerful indicator taxa are used or evaluated for classifications or revisions in assessment practice. A reduction of taxonomic precision to genus level lowered the statistical significance and requires careful examination of the species preferences within the genus concerned.
INTRODUCTION
The implementation of the European Water Framework Directive (European Commission, 2000) requires a large number of samples of aquatic organisms for monitoring the ecological status of the water bodies of concern. Many data on the distributions and indication values of aquatic macroinvertebrates are obtained from different types of water bodies in different ecological conditions (see e.g. Ehlert et al., 2002). As a side effect of this monitoring, knowledge and understanding of species biogeography and taxonomy potentially grow. It is time-consuming to recover this treasure, and financial support for this highly qualified work is very rare. However, the evaluation of these data promises improvement of indicator power and assessment quality.
In other countries (e.g. The Netherlands, Austria), midge larvae are integrated into the assessment routines. In Germany, however, they have to date been included only at tribe level. Thus, knowledge of the biogeographic distribution of the species is relatively restricted. Up to now, a comprehensive overview of the chironomid fauna has been published only for the Land of Brandenburg, based on the monitoring activities of the last decade (Orendt et al., 2014). The authors found an enormous increase of species records, from ca. 40 to 408 with a valid species status within a few years, representing 56% of the species and 73% of the genera of the German chironomid fauna. A basic analysis of species distributions in regional water body types found each water body type to be an individual. Further analysis was required (Orendt et al., 2014).
In this paper, the distribution and preferences of chironomids along a degradation gradient are investigated. Earlier, Marziali et al. (2010) elaborated indicators of the ecological status of streams independent of morphological conditions. A series of references addresses the relations between chironomid communities and their response to environmental factors and pollution, mostly using multivariate methods (Armitage and Blackburn, 1985; Campbell et al., 2009; Mauad et al., 2017; Odume et al., 2015; Özkan et al., 2010; Popović et al., 2016). Here, however, taxa occurrences in degradation states along a five-level gradient in several water body types are studied, but not their relations to single measured environmental factors as done e.g. by Ehlert et al. (2002) and in the papers mentioned. These types are classified a priori by unimpacted conditions and serve as a reference in bio-assessment for the implementation of the European Water Framework Directive (European Commission, 2000). The environmental gradient used here is the degradation inferred from the non-chironomid taxa deriving from the monitoring mentioned above. Considering this, the study presented here undertakes an analysis to follow the questions of i) whether the distribution of chironomid taxa can be explained better by the water body types (or groups of them) or by levels of degradation (i.e. quality classes), and ii) whether the distribution of the chironomids reflects the classification of the water body types specified and their degradation levels. The results should help to evaluate the ecological preferences of the taxa known so far (e.g. from Moller Pillot, 1984a, 1984b, 2009, 2013; Moller Pillot and Buskens, 1990) and to revise their qualification as indicators in water assessments (e.g. indices as in Moog and Hartmann, 2017).
AREA AND WATER BODIES INVESTIGATED
The Land of Brandenburg covers a part of the "Central European lowlands", with few places rising above 100 m asl. The sites sampled during the monitoring were situated in all parts of the region (see the map in Orendt et al., 2014). 85% of the sites were sampled at least twice, mostly three or more times. The sampling season was spring in most cases. Some samples were taken in autumn, which is also accepted by the rules of the protocol (Meier et al., 2018), but not in summer, as assessment results would be biased by the absence of emerged species and thus be wrong. Koperski (2009) also found seasonal effects to be a strong factor for differences in the taxonomic composition at a sampling site.
The sites represented 11 running water types (Tab. 1), following the definitions by Meier et al. (2018), and differed in their ecological status across five levels ('high', 'good', 'moderate', 'poor', 'bad') as specified by the EU Water Framework Directive classification (European Commission, 2000) (Tab. 1). This Directive requires 'good' quality for all water bodies of certain importance, referring to chemical, biological, and morphological criteria based on the reference conditions of water body types. These conditions can be inferred as described by Hawkins and Vinson (2000) or, following the Water Framework Directive more closely, by Ehlert et al. (2002) and Sanchéz-Montoya et al. (2007), resulting in water body types and characteristic communities for each of these types. Furthermore, the Directive requires assessing the quality class for the water body concerned. For water bodies with a status of less than 'good', measures have to be undertaken in order to improve the quality. Biological conditions are assessed using phytobenthic, macrophytic, macro-invertebrate, and fish communities. This paper refers only to running water types and to assessments deriving from macro-invertebrate communities. In the data used here, samples from the same sites differed in their ecological status in some cases.
Data sources
The data used here derived from the macro-invertebrate database of the Land Brandenburg (Environmental Agency, LfU, Potsdam), obtained from monitoring during 2004 to 2013. Generally, the data include larvae, pupae, pupal exuviae, and adults obtained from sampling according to the standard procedure (Meier et al., 2018) for assessment under the Water Framework Directive. The procedure requires sampling from all habitats present at a site in quantitative relation to the share of their coverage, resulting in a mixed sample of 20 subsamples. As a total, this should cover 1.25 m2 of surface over a stretch length of 50 m. Abundances were re-calculated to individuals m-2. A minimum of 350 individuals (in Brandenburg 500) out of at least 1/6 of the whole sample were sorted and counted and the taxa identified. In the general standard, however, chironomids are considered only at tribe level, but the routines in Brandenburg require identification as precise as possible. In addition, adults were collected from the riparian vegetation in some years. The database held records from 2396 samples at 896 sites from 201 water courses. Most of the material comprised larvae. The data concerning the ecological status of the sampled water bodies derived from assessments based on the whole macro-invertebrate communities (for classes, see Tab. 1). Lower ecological status derives mostly from morphological degradation of the water body sampled. The identifications are reliable, as all colleagues involved had considerable experience with all developmental stages: Claus Orendt, Leipzig; Claus-Joachim Otto, Fahrenkrug; Berthold Janecek, Vienna; Xavier-Francois Garcia, Berlin; and Susanne Michiels, Emmendingen. Records were extracted from the database according to the taxonomic level identified. Nomenclature was revised referring to the recent version of the online database in Fauna Europaea (Saether and Spies, 2013) and to Spies and Saether (2004), in which all synonyms are also listed and which has the highest priority in questions of nomenclature.
Data processing
The records of taxa in a sample were available as counts, densities (individuals m-2), or presence/absence. For two reasons, all data were transformed into presence/absence in each sample: i) it was not possible to equalize the different sorts of abundances, and ii) high variations of the original abundances or densities of the taxa in the samples tend to bias the constancy of a taxon in a water body type or a degradation class and would not support the aim of the study. The data from adults and pupal exuviae were omitted, as they were not collected consistently and therefore provided only scattered information, leaving only larvae and pupae for further processing. The data (frequencies) from each sample were summarized for the different quality classes (Tab. 1) of each running water type, leaving 46 variables (e.g. 'type 11_high', 'type 11_good', etc.). In type 11 and type 14, some data from 'high' quality states were available (Tab. 1). From the other types, communities from only lower states were sampled. Then, in order to reduce statistical noise, dominant and evenly distributed taxa (present in >39 variables) and rare taxa (present in <3 variables) were eliminated, and species with similar ecological preferences were summarized (e.g. Endochironomus tendens and E. albipennis summarized to Endochironomus sp., or all species of Ablabesmyia), leaving 250 taxa for the further analysis out of 432 taxa recorded altogether. To analyse the distribution of the taxa in certain quality classes in the water body types (from 'high' to 'bad'), a detrended correspondence analysis (DCA; Hammer, 2012) was performed. For this, the relative frequencies of the taxa in the different quality classes of each running water type were used, not the original density values. This transformation mirrors the shares of the taxa in the community and makes the part of a taxon better comparable between sites than the original counts or frequencies. In a test run, an evaluation using original frequencies did not show substantial differences in the results, so the more realistic data were used from then on. To study and show the pattern of the degradation gradient in each water body type, the quality classes from 'high' to 'bad' were plotted against the scores of the first axis of the DCA. For relations of taxa along the gradient, their DCA scores (1st axis) were correlated with the frequencies of each taxon and tested for significance using Spearman's rank correlation coefficient.
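The presence/absence transformation and the prevalence filter described above can be sketched as follows. This is a minimal illustration with an invented sample-by-taxon matrix and a toy threshold; the study's actual cut-off was presence in 3 to 39 of the 46 variables.

```python
import numpy as np

# Hypothetical sample-by-taxon count matrix (rows: samples, columns: taxa).
counts = np.array([
    [5, 0, 2, 0],
    [0, 3, 1, 0],
    [7, 0, 0, 0],
    [2, 1, 4, 0],
])

# Step 1: reduce counts/densities to presence/absence.
presence = (counts > 0).astype(int)

# Step 2: drop taxa that are too rare or too ubiquitous.
# Toy rule: keep taxa present in at least 2 samples but not in all of them.
prevalence = presence.sum(axis=0)
keep = (prevalence >= 2) & (prevalence < presence.shape[0])
filtered = presence[:, keep]  # the never-recorded fourth taxon is removed
```

Relative frequencies per type-by-quality variable would then be computed from such presence/absence tables before the ordination.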
For the general distribution of taxa in quality classes, regardless of water body type, the frequencies in the different quality classes from all samples were summarized (see Annex 1).
Water body types and degradation classes
Species richness, as a simple community descriptor, apparently indicated an increase from 'good' to 'poor' conditions and then a drop to 'bad' conditions. However, due to the inconsistent response in the different types, these differences were not statistically significant, except for the differences between 'poor' or 'moderate' and 'bad' conditions, respectively (p<0.05; Wilcoxon test; values from waters of 'high' status were neglected here due to the restricted amount of data).
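A richness comparison of this kind can be reproduced in outline with SciPy's paired Wilcoxon signed-rank test. The richness values below are invented for illustration and are not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical species richness per water body type in two quality classes,
# paired by type (values are illustrative only).
richness_poor = [48, 52, 61, 45, 57, 50, 66, 43]
richness_bad = [39, 41, 55, 38, 49, 44, 58, 40]

# Two-sided paired test of the 'poor' vs 'bad' richness values.
stat, p_value = wilcoxon(richness_poor, richness_bad)
```

With every 'poor' value above its 'bad' counterpart, the test statistic is 0 and the difference is significant at p<0.05, mirroring the kind of contrast reported above.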
The DCA revealed that the distribution of the taxa was determined by quality classes, on the one hand, and by water body types, on the other. This is illustrated in two different views. In Fig. 1 (left panel), the data points of each quality state are marked with the same colour of hulls and symbols, indicating a gradual shift from 'bad' to 'good' in the half-right upwards direction. Only the points for the 'high' states do not follow this shift pattern. In Fig. 1 (right panel), the data points of each water body type are marked with the same colour and symbols. In this view, a shift in the diagonal direction also appears, but it simply shows a clear separation of certain water body types from one another, as no gradient was measured. The elongated shape of most of the types is due to the shift direction from 'bad' to 'good' observed in Fig. 1 (left panel). The first axis accounts for 20.8% of the total variance and is much stronger than the 2nd (7.4%) and following axes. A plot of the scores from the 1st axis along the degradation classes (Fig. 2) illustrates that the chironomid communities reflect, in many but not all water body types, a gradient from higher to lower quality classes. Particularly in types 11, 14, and 21, the decline is not continuous. Also, the only data points for 'high' quality status, in types 11 and 14, do not follow the general pattern.
To examine whether a grouping of some water body types would make the results more significant, a DCA was performed with only 'good' and 'high' sites. However, neither the grouping of types in proximity nor an additional, arbitrarily tried grouping of small and large water body types was able to consolidate or refine the distribution patterns of the taxa across degradation levels.
Species preferences
The correlation of the frequencies of the taxa with the scores from the DCA 1st axis resulted in a list of stronger and weaker indicators along the gradient (Tab. 2). Generally strong preferers of 'worse' conditions in these water body types were e.g. Endochironomus sp., Glyptotendipes pallens, Polypedilum pedestre, and G. paripes. Strong preferers of 'moderate' or 'better' conditions were e.g. Brillia sp., Tvetenia discoloripes/verralli, Prodiamesa olivacea, Micropsectra notescens gr., and Polypedilum cultellatum. However, the preferences were not consistent for each water body type (Fig. 3). This corresponds to the results from above (Fig. 2).
Some taxa with the highest correlations shall be presented as examples here (Fig. 3). G. pallens was mostly found in 'poor' and 'bad' water bodies of type 12 (Mid-sized and large organic substrate-dominated rivers), whereas in type 20 (Very large sand-dominated rivers) the species was more evenly frequent in water bodies of all degradation classes (similar to Endochironomus sp.). As a preferer of better conditions, Tvetenia discoloripes/verralli was most frequent in water bodies of 'good' quality, but obviously only in type 11 (Small organic substrate-dominated rivers). Prodiamesa olivacea and Paratrissocladius excerptus (no chart) were found most frequently in water bodies of 'good' quality only in type 14 (Small sand-dominated lowland rivers), whereas P. olivacea appeared with no greater differentiation in the other types, and P. excerptus was, in type 11 (Small organic substrate-dominated rivers), more frequent under 'worse' conditions. Only a few species were recorded to distinctly prefer 'moderate' or 'poor' water bodies. Conchapelopia sp. was statistically strongly correlated with 'moderate' conditions regardless of water body type, but also with frequencies not much lower than in the adjacent quality classes of some types. This suggests Conchapelopia sp. to be a more general indicator of quality, not restricted to a certain type, in contrast to the taxa mentioned above. Preferers of 'poor' conditions were Polypedilum pedestre, with higher frequencies in type 12 but evenly abundant in type 20, and Clinotanypus nervosus (no chart). However, the latter correlation with the gradient was weak (-0.204; not significant).
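The ranking of taxa by their correlation with the DCA first-axis scores (as in Tab. 2) follows a simple pattern; the axis scores and the two invented taxa below are illustrative only.

```python
from scipy.stats import spearmanr

# Hypothetical DCA axis-1 scores for six type-by-quality variables,
# ordered from 'bad' to 'good' conditions.
axis1_scores = [-1.2, -0.8, -0.1, 0.3, 0.9, 1.4]

# Relative frequencies of two invented taxa across the same variables.
taxa = {
    "prefers_good": [0.00, 0.05, 0.10, 0.20, 0.30, 0.35],
    "prefers_bad": [0.40, 0.30, 0.15, 0.10, 0.05, 0.00],
}

# Spearman rank correlation of each taxon's frequencies with the scores;
# positive coefficients indicate a preference for better conditions,
# negative coefficients a preference for worse conditions.
results = {}
for name, freqs in taxa.items():
    rho, p = spearmanr(axis1_scores, freqs)
    results[name] = rho
```

Sorting taxa by these coefficients and keeping those above a chosen threshold (the paper used |rS| > 0.5) yields a ranked indicator list of the kind shown in Tab. 2.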
A reduction of the taxonomic level to genus was also tested. The resulting scatter diagram was similar to the plot based on the more precise taxonomic resolution (Fig. 1). However, the gradient in the data was much less striking (axis 1: 12.694% of total variation; axis 2: 11.234%; axis 3: 9.761%), as also indicated by the absence of a distinctive gradient pattern like that in Fig. 2. Moreover, correlations were much lower and only very few were statistically significant.
Use for practice
The sum of all frequencies of each taxon in each quality class, regardless of the water body type in which it occurred (Tab. S1), provides a comprehensive overview of strong and weak indicators of conditions. Vertically sorting the frequencies of the single taxa across the quality classes illustrates the species shift from 'high' to 'bad' conditions.
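Summing the frequencies of each taxon per quality class regardless of water body type is a plain aggregation; the following is a minimal pandas sketch with invented records.

```python
import pandas as pd

# Hypothetical long-format records: one row per (taxon, quality class)
# with the summed frequency observed (values are illustrative only).
records = pd.DataFrame({
    "taxon": ["A", "A", "A", "B", "B", "B"],
    "quality": ["good", "moderate", "bad", "good", "moderate", "bad"],
    "freq": [30, 10, 2, 1, 8, 25],
})

# Sum frequencies per taxon and quality class, then order the columns
# from better to worse conditions to show the species shift.
table = records.pivot_table(index="taxon", columns="quality",
                            values="freq", aggfunc="sum")
table = table[["good", "moderate", "bad"]]
```

Sorting the rows of such a table by where each taxon's maximum falls reproduces the 'vertical sorting' described above.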
DISCUSSION
The approach of these first evaluations of the database studied the distribution of chironomids along a gradient. This gradient is represented by five levels derived from bio-assessment based on macro-invertebrates, including chironomids only at tribe level. In contrast to many other studies, which address the relation between communities and environmental factors, no ecological types as references were to be inferred here (as performed e.g. in Ehlert et al., 2002; Sanchéz-Montoya et al., 2007; Hawkins and Vinson, 2000). This investigation focussed on the adoption of chironomids at a precise taxonomic level into an existing assessment system based on ecological types elaborated earlier. The results show that the distribution of the chironomids follows, for the greater part, the a priori classification given by the non-chironomid macro-invertebrates.

Tab. 2. Correlation of chironomid taxa frequencies with the score of DCA axis 1 (Spearman rank correlation). The list is restricted to taxa with a coefficient >0.5. Positive coefficients (rS) indicate a preference for better, negative for worse conditions. P, level of significance. The full list is provided as Tab. S2.
The DCA revealed that the distribution of the taxa is determined by both water body type and degradation, however with different patterns. On the one hand, the water body types segregate from each other, in some cases clearly, regardless of their ecological quality status. The diagonal distribution of the water body types in the plot may suggest a gradient following a certain environmental factor not measured here. Indeed, the points for two larger water bodies (Large sand- and loam-dominated lowland rivers, and Very large sand-dominated rivers) are located in the left part of the chart, but elsewhere other large waters are mixed with smaller ones, so it is not convincing to consider water body size a driving force. It may be possible to find such a factor in a single parameter used while establishing the water body typology (Lorenz et al., 2004), but this is not analysed here.
On the other hand, the plots showed a shift of communities within a degradation gradient, indicating a second driving force for the taxa distribution. This suggests that chironomids react in a similar direction to the other macro-invertebrates, from which the water quality data were derived.
The constant drift of the community with declining water body quality (Fig. 2) was the rule for most of the water body types included here, indicating a constantly decreasing quality. However, in some other water body types, the taxon distributions probably changed under non-constant conditions along the degradation gradient due to a discontinuous change or loss of typical habitats at certain degradation levels (e.g. from 'poor' to 'bad' in type 11). The loss of habitat types under degraded conditions and the consequences for a correct assessment were investigated by Marziali et al. (2010) and Odume et al. (2015). From both studies it can be concluded that problems in the assessment of degradation may occur when communities are not sampled from the same habitat along a pollution gradient due to its loss or temporal absence.
In type 21 (Lake outflows) and type 16 (Small gravel-dominated lowland rivers), the changes in the communities seemed to be small. In both types with data from 'high' status, the starting point seems to be very far from the next quality level. However, as only 6 water bodies were sampled, these data are not reliable, and the scale for taxa preferences remains restricted to the range from 'good' to 'bad' so far. In following studies, more data from more samples of 'high' quality waters are desirable, if available. However, this should be considered and tested later, as the goal of this paper was to investigate general patterns and tendencies of ecological distribution.
Breaking down the distribution in the quality classes to each water body type revealed taxa preferences i) only in certain water body types, or ii) general preferences regardless of water body type (e.g. Conchapelopia sp.). In the first case, the preferences are of importance for validation only for the respective water body type(s). In the second case, the taxa of concern are candidates for general bio-indication, no matter in which water body they are sampled. When the results are used for validating indicator values, this has to be kept in mind. Summing up all frequencies of each taxon in each quality class, regardless of the water body type in which it occurred (sorted list, Tab. S1), makes the preferences ready for further statistical evaluation. The results can be used, in general, for evaluating existing indices and preferences or for elaborating indices and weights of powerful indicator taxa for classifications or revisions. However, before implementation the results have to be discussed (referring to indicator classifications from other countries, e.g. Janecek et al., 2017a and references therein) or tested for plausibility before use.
The reduction of the taxonomic level to genus may be useful for special questions and help to facilitate the work in practice. The results followed the general trends based on the more precise resolution, but were much less significant. This corresponds to the findings of Greffard et al. (2011). The authors concluded, however, that the finest taxonomic level is recommended for more precise and detailed information on environmental conditions. This should nevertheless be studied in more detail and very carefully. In those genera which comprise species of similar ecological preferences, a genus-level approach may be justified (e.g. Endochironomus), while in others the differences are remarkable (Polypedilum pedestre prefers worse conditions, in contrast to P. cultellatum, which was more frequent in higher quality classes).
CONCLUSIONS
The analysis of the chironomid taxa records from the European central lowland region of Brandenburg showed that the distribution of the taxa is driven by their occurrences in i) water body types and ii) quality classes. The results can be used for validating and establishing specific indicator values for bio-assessment. However, some taxa showed clear quality preferences only in some water body types. This allows the classification of a series of taxa according to their preferences and opens this insect group to a broader and more useful application in degradation assessment than currently practised in many protocols. Especially in water bodies from which only too small a number of species and individuals of non-chironomid macro-invertebrates can be collected for a reliable assessment, midge larvae and pupae can provide robust results. However, further studies should be performed to elucidate the indication power of the taxa in the water body types considered here. The next step for practical use of the results from this study should be to compare the findings on the taxa's preferences with expert knowledge and references in the literature, and so enhance both the power and usefulness of indication and the quality of assessment systems.
C. Orendt programme and the management of the database. Many thanks for their co-operation and the supply of the data. Thanks also to two anonymous reviewers for their thorough examination and useful comments to improve the manuscript.
Fig. 1 .
Fig. 1. Plot of the axis scores from the DCA (axis 1: 20.76% of total variance, eigenvalue 0.3136; axis 2: 7.39%, eigenvalue 0.1116; axis 3: 3.14%, eigenvalue 0.0478). Left panel: colours of the hulls represent the same degradation level in the water body types (red, bad; orange, poor; green, moderate; dark blue, good; light blue, high). Right panel: colours of the hulls represent the same water body type (for definitions of the representing numbers, see Tab. 1).
Fig. 2 .
Fig. 2. Plot of the scores from the 1st axis in the degradation classes of the water body types. For definitions of the representing numbers, see Tab. 1.
Fig. 3 .
Fig. 3. Frequencies of selected taxa broken down to their occurrences in the water body types and the quality classes.
"year": 2018,
"sha1": "771dd3ccd9ebabc37b0dba17cb4cfc2565a8123e",
"oa_license": "CCBYNC",
"oa_url": "https://jlimnol.it/index.php/jlimnol/article/download/jlimnol.2018.1790/1460",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "771dd3ccd9ebabc37b0dba17cb4cfc2565a8123e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
18246880 | pes2o/s2orc | v3-fos-license | The Effects of Annatto Tocotrienol on Bone Biomechanical Strength and Bone Calcium Content in an Animal Model of Osteoporosis Due to Testosterone Deficiency
Osteoporosis reduces the skeletal strength and increases the risk for fracture. It is an underdiagnosed disease in men. Annatto tocotrienol has been shown to improve bone structural indices and increase expression of bone formation genes in orchidectomized rats. This study aimed to evaluate the effects of annatto tocotrienol on biomechanical strength and calcium content of the bone in orchidectomized rats. Thirty three-month-old male Sprague-Dawley rats were randomly assigned to five groups. The baseline control (BC) group was sacrificed at the onset of the study. The sham-operated group (SHAM) received olive oil (the vehicle of tocotrienol) orally daily and peanut oil (the vehicle of testosterone) intramuscularly weekly. The remaining rats were orchidectomized and treated with three different regimens, i.e., (1) daily oral olive oil plus weekly intramuscular peanut oil injection; (2) daily oral annatto tocotrienol at 60 mg/kg plus weekly intramuscular peanut oil injection; (3) daily oral olive oil plus weekly intramuscular testosterone enanthate injection at 7 mg/kg. Blood, femur and tibia of the rats were harvested at the end of the two-month treatment period for the evaluation of serum total calcium and inorganic phosphate levels, bone biomechanical strength test and bone calcium content. Annatto-tocotrienol treatment improved serum calcium level and tibial calcium content (p < 0.05) but it did not affect femoral biomechanical strength (p > 0.05). In conclusion, annatto-tocotrienol at 60 mg/kg augments bone calcium level by preventing calcium mobilization into the circulation. A longer treatment period is needed for annatto tocotrienol to exert its effects on bone strength.
Introduction
Osteoporosis is characterized by a reduction in bone density and quality, ultimately leading to reduced bone strength and increased fracture risk, particularly at the hip, spine and wrist [1]. The prevalence of osteoporosis is rising concurrently with the increase in life span of the elderly population globally. Estimation in the year 2000 revealed that nine million osteoporotic fractures occurred worldwide [2]. The European countries contributed the highest number of cases and the greatest number of disability-adjusted life years lost due to osteoporotic fractures [2]. However, projection shows that by the year 2050, 50% of osteoporotic hip fractures will occur in Asia [3]. Osteoporosis is more prevalent in women compared to men [2]. Despite this, osteoporosis remains an underdiagnosed disease in men.

Testosterone enanthate (Jesalis Pharma, Jena, Germany) was diluted with peanut oil (Sime Darby, Subang Jaya, Malaysia) and administered intramuscularly at a dose of 7 mg/kg body weight.
Thirty three-month-old male Sprague-Dawley rats were obtained from the Laboratory Animal Resource Unit, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The rats were housed individually in plastic cages and provided with free access to tap water and standard rat chow (Gold Coin, Port Klang, Malaysia) under room temperature and natural light/dark cycle. Following a week of acclimatization, the rats were randomized into five groups of six rats, namely baseline control (BC), sham-operated (SH), orchidectomized control (ORCH), annatto tocotrienol-treated (AnTT), and testosterone enanthate-treated (TE) groups. The baseline control (BC) group was sacrificed at the onset of the study. The ORCH, AnTT and TE groups underwent bilateral orchidectomy. The SH group was subjected to similar surgical stress, but their testes were retained. The AnTT group was treated with oral annatto tocotrienol 60 mg/kg daily while the SH, ORCH and TE groups were given an equal volume of olive oil orally daily. The TE group received intramuscular testosterone enanthate at 7 mg/kg weekly while the other groups received an equal volume of intramuscular peanut oil injection weekly. After eight weeks of treatment, all rats were sacrificed and their bones were harvested and kept at −70 °C for the biomechanical strength test and calcium content assay.
Biochemical Analysis
Blood was collected from the rats at the end of the treatment period, prior to euthanasia. Serum was extracted immediately after centrifuging the blood at 3000 rpm for 10 min at 4 °C. The serum was stored at −70 °C until analysis. Calcium and phosphate were measured using the QuantiChrom™ Calcium and Phosphate Assay Kits, respectively (BioAssay Systems, Hayward, CA, USA), based on a colorimetric method.
Bone Biomechanical Strength Test
The biomechanical strength test was conducted using an Instron Universal Testing Machine (5560 Instron, Canton, OH, USA) (Figure 1) with the Bluehill 2 software package (Instron, Canton, OH, USA). Briefly, the right femur, cleaned of soft tissue, was kept wet with phosphate-buffered saline-soaked gauze before testing. It was mounted on two inferior supports and a load (speed 5 mm/min; span length 10 mm) was applied to the midshaft on its anterior surface until fracture. The data were then analyzed by the Bluehill software to calculate the load (N), stress (MPa), strain (mm/mm) and extension (mm) of the bone.
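The stress and strain reported by the testing software can be reproduced from the raw load and midspan-deflection data with the standard three-point-bending beam formulas. A minimal sketch, assuming an idealized rectangular cross-section (a real femur requires the moment of inertia of its actual, roughly hollow elliptical cross-section); the dimensions below are hypothetical, not taken from the study:

```python
def three_point_bending(load_N, deflection_mm, span_mm=10.0, width_mm=4.0, height_mm=3.0):
    """Estimate flexural stress (MPa) and strain (mm/mm) in a three-point bend test.

    Uses textbook beam formulas for a rectangular cross-section;
    all dimensions except the 10 mm span are illustrative placeholders.
    """
    # Maximum flexural stress at the midspan surface: sigma = 3FL / (2bh^2)
    # (N and mm give N/mm^2, i.e. MPa)
    stress_MPa = 3.0 * load_N * span_mm / (2.0 * width_mm * height_mm ** 2)
    # Flexural strain at the same point: epsilon = 6Dh / L^2 (dimensionless)
    strain = 6.0 * deflection_mm * height_mm / span_mm ** 2
    return stress_MPa, strain
```

For example, a 100 N load at 0.5 mm deflection with the placeholder geometry gives a stress of about 41.7 MPa and a strain of 0.09 mm/mm.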
Bone Calcium Content
The right tibias, cleaned of soft tissues, were dried in an oven at 100 °C for 24 h. They were then ashed in a furnace at 800 °C for 12 h. The ash was weighed, dissolved in 3 mL of nitric acid and later diluted in lanthanum chloride. Calcium was measured with an Atomic Absorption Spectrophotometer (Shimadzu AA-680, Shimadzu, Kyoto, Japan) at 422.7 nm.
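From the spectrophotometer reading, bone calcium content can be back-calculated with simple dilution arithmetic. A sketch with hypothetical numbers (the lanthanum chloride dilution factor and the exact normalization used in the study are not reported):

```python
def bone_calcium_content(aas_conc_mg_L, digest_volume_mL, dilution_factor, ash_weight_mg):
    """Back-calculate calcium content (mg Ca per g of bone ash) from an AAS reading.

    aas_conc_mg_L: calcium concentration read by the spectrophotometer (mg/L)
    digest_volume_mL: volume of the acid digest (e.g. the 3 mL of nitric acid)
    dilution_factor: overall dilution applied before measurement (hypothetical)
    ash_weight_mg: weight of the bone ash that was dissolved
    """
    # Total calcium in the original digest (mg)
    total_ca_mg = aas_conc_mg_L * (digest_volume_mL / 1000.0) * dilution_factor
    # Normalize to ash weight (mg Ca per g of ash)
    return total_ca_mg / (ash_weight_mg / 1000.0)
```

For instance, a 5 mg/L reading from a 3 mL digest diluted 100-fold, from 50 mg of ash, corresponds to 30 mg Ca per g of ash.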
Statistical Analysis
Data analysis was performed using the Statistical Package for the Social Sciences (SPSS) software version 20.0 (IBM, Armonk, NY, USA). The Shapiro-Wilk test was used to assess the normality of the data. Normally distributed variables were analyzed using analysis of variance (ANOVA) followed by Tukey's post-hoc test. Skewed data were analyzed using the Kruskal-Wallis test with the Mann-Whitney U-test as a post-hoc pairwise comparison. Statistical differences were assumed significant at p < 0.05. The results were expressed as mean values ± standard error of the mean (SEM).
Results
The final body weight among the treatment groups was similar, with the exception that the ORCH and AnTT groups were significantly heavier than the BC group (p < 0.05) (Figure 2).
There was no significant difference in serum total calcium level between the SH and ORCH groups (p > 0.05). Serum total calcium level was significantly lower in the AnTT and TE groups compared to the ORCH and SH groups (p < 0.05) (Figure 3A). No significant difference was detected in serum inorganic phosphate level among the SH, ORCH, AnTT and TE groups (p > 0.05) (Figure 3B).
Figure 3. Post-treatment serum total calcium (A) and inorganic phosphate (B) levels in rats. The data are shown as mean with standard error of the mean. Letter 'a' indicates a significant difference versus the baseline group; 'b' versus the sham-operated group; 'c' versus the orchidectomized group. Abbreviations: AnTT, annatto tocotrienol-supplemented group; BC, baseline control group; ORCH, orchidectomized group; SH, sham-operated group; TE, testosterone enanthate-supplemented group.
Bone calcium level was not significantly different between the SH and ORCH groups (p > 0.05). Both the AnTT and TE groups had significantly higher bone calcium levels compared to the ORCH group (p < 0.05). Annatto tocotrienol treatment increased the calcium content more than testosterone enanthate (p < 0.05) (Figure 4).
There were no significant changes in biomechanical strength indices between the SH and ORCH groups (p > 0.05). Supplementation with annatto tocotrienol did not improve the biomechanical strength of the femur in rats (p > 0.05). Load, stress and strain of the rats treated with testosterone were significantly higher compared to the AnTT group (p < 0.05). No significant difference was found in the extension of the femur among the study groups (p > 0.05) (Figure 5A-D).
Discussion
The present study showed that testosterone deficiency did not cause significant changes in circulating total calcium and inorganic phosphate levels, bone calcium content or bone biomechanical strength (p > 0.05). Annatto tocotrienol suppressed the serum total calcium level and increased the tibial calcium content in orchidectomized rats (p < 0.05). However, these changes did not translate into augmented bone biomechanical strength in the annatto tocotrienol-supplemented rats (p > 0.05). Testosterone exerted effects similar to tocotrienol on serum total calcium level and bone calcium content (p < 0.05). The femoral load, stress and strain of testosterone-treated rats were better than those of annatto tocotrienol-treated rats (p < 0.05).
Testosterone deficiency did not induce a significant alteration in serum calcium level in male rats. This observation is supported by a previous study showing that testosterone played a minor role compared to estrogen in calcium regulation in men [31]. In that study, men were deprived of both testosterone and estrogen; those deprived of estrogen experienced an increase in serum calcium level compared to baseline, but the serum calcium level in men deprived of testosterone did not change significantly [31]. In another study, high-dose testosterone enanthate supplementation for 16-20 weeks in a group of healthy young men lowered their serum calcium level significantly without changing their calcium excretion pattern [32]. Therefore, calcium level is affected by high-dose exogenous testosterone exposure, but not by variation in endogenous testosterone.
Studies of annatto tocotrienol on serum calcium and phosphate levels, bone mineral content and bone mechanical strength in the testosterone deficiency model are limited. Therefore, in the following discussion, comparisons are made with studies using other models of bone loss and tocotrienol derived from other sources. In this study, both annatto tocotrienol and testosterone significantly lowered the serum total calcium level, presumably by preventing mobilization of skeletal calcium into the circulation. Decreased mobilization of the calcium reserve caused by annatto tocotrienol was evidenced by the higher tibial calcium content in the AnTT group. This was consistent with the study by Ima Nirwana et al., which demonstrated that palm vitamin E at 60 mg/kg for eight months preserved femoral and vertebral calcium content in orchidectomized rats [30]. On the other hand, Muhammad et al. showed that palm tocotrienol treatment at 60 mg/kg for eight weeks did not increase lumbar calcium content in ovariectomized rats [33]. Extended treatment with palm vitamin E at 30 mg/kg and 60 mg/kg for 10 months also failed to augment lumbar and femoral calcium levels in ovariectomized rats [34]. Thus, there might be gender differences in the skeletal protective effects of tocotrienol, whereby it is more effective in preserving calcium in male rats.
The calcium conserving effects of tocotrienol have been implied by many previous studies. Norazlina et al. indicated that vitamin E was essential for normal calcium metabolism. Rats receiving vitamin E-deficient diet suffered from increased parathyroid level, impaired calcium absorption and low calcium content in the lumbar spine [35,36]. Supplementation with gamma-tocotrienol could preserve the calcium content in rats fed with vitamin E-deficient diet [37].
Osteoclasts are the bone cells responsible for bone resorption [38]. Thus, reducing osteoclastic activity could prevent the bone resorption that releases calcium from the bone into the circulation. Chin et al. showed that supplementation of annatto tocotrienol at 60 mg/kg decreased the osteoclast number and eroded surface on the trabecular bone of orchidectomized rats [28]. Similar observations were obtained in studies supplementing annatto tocotrienol or palm tocotrienol in estrogen-deficient rats [26]. In vitro studies also demonstrated that individual isomers of tocotrienol could prevent the formation of osteoclast-like cells and inhibit their resorption activity [39]. The reduction in osteoclastic activity caused by annatto tocotrienol could contribute to the lower circulating calcium level in the rats. However, the circulating phosphate level was not altered significantly in rats receiving tocotrienol treatment. Circulating phosphate level might be regulated more tightly, so its variations could be less apparent. Besides, the breakdown of hydroxyapatite, the predominant form of calcium storage in the bone, yields five calcium ions for every three phosphate ions. Thus, variation in bone resorption activity might give rise to greater changes in calcium level than in phosphate level. In addition, the improvement in bone calcium content in the rats treated with annatto tocotrienol might be contributed by enhanced bone formation activity. Chin et al. demonstrated that annatto tocotrienol at 60 mg/kg for eight weeks specifically increased the expression of bone formation genes coding for alkaline phosphatase, collagen type I alpha 1, osteopontin and beta-catenin in orchidectomized rats [29]. In an osteopenic model due to estrogen deficiency, palm tocotrienol at 60 mg/kg for two months increased the mineral apposition rate and bone formation rate in rats [40,41].
Gamma-tocotrienol and palm tocotrienol-rich fraction at 60 mg/kg for two months were able to exert similar actions in a bone loss model due to nicotine in male rats [42]. Moreover, previous studies showed that annatto tocotrienol supplementation could increase osteoblast number and osteoid volume in gonadectomized rats [26,28]. These studies showed that both annatto and palm tocotrienol could increase osteoblastic proliferation, survival and activity in osteopenic rats, thus contributing to increased mineralization and calcium storage in the bone.
Material and geometric properties are the main contributors to skeletal biomechanical strength [43]. Yarrow et al. showed that orchidectomy did not significantly reduce the load and stiffness of the bone in male rats [44]. The enhanced bone calcium content caused by annatto tocotrienol should have improved the material properties of the bone, but it did not translate into better biomechanical indices in the AnTT group. We speculate that the geometrical properties of the bone were not altered within eight weeks of annatto tocotrienol supplementation. This is supported by a previous observation that supplementation of annatto tocotrienol at 60 mg/kg for eight weeks did not improve the structural model index, an assessment of the geometrical structure of trabecular bone generated using micro-computed tomography, in orchidectomized rats [29]. Using an estrogen-deficient bone loss model, Muhammad et al. and Nazrun et al. showed that palm tocotrienol at 60 mg/kg for two months did not increase the biomechanical strength of osteopenic rats [33,45]. However, Shuid et al. showed that gamma-tocotrienol at 60 mg/kg for two months was able to increase the biomechanical strength of the femur in normal male rats [46]. We hypothesize that tocotrienol might need a longer time to augment bone biomechanical strength in sex hormone-deficient rats than in normal rats. On the other hand, bone biomechanical strength was better in rats treated with testosterone compared to rats treated with annatto tocotrienol. This is consistent with previous findings that improvements in bone structural parameters were better in testosterone-treated rats compared to annatto tocotrienol-treated rats [29]. Despite the lack of difference in bone strength between rats with and without testosterone deficiency, Yarrow et al. demonstrated that testosterone enanthate at 7 mg/kg could improve the load but not the stiffness of the bone [44].
Several limitations need to be addressed. The study duration was eight weeks, which might be insufficient to augment the bone biomechanical strength of osteopenic rats; better results might be achieved by prolonging the study period. Examination of calcium homeostasis in the rats by isotopic methods might clarify the effects of annatto tocotrienol on calcium further. Nevertheless, this is the first study to examine the effects of annatto tocotrienol on skeletal biomechanical strength and bone calcium content in testosterone-deficient rats.
Conclusions
In conclusion, annatto tocotrienol at 60 mg/kg for eight weeks is able to prevent mobilization of calcium from the bone into the circulation and increase skeletal calcium content. However, the dose and treatment duration should be adjusted to improve its effects on bone biomechanical strength. Further studies are warranted to justify a trial of annatto tocotrienol in testosterone-deficient male osteoporotic patients. | 2018-04-03T00:45:31.613Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "702c48d4e3da6a118f41cbe19459b6a107741e21",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/8/12/808/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "702c48d4e3da6a118f41cbe19459b6a107741e21",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216317094 | pes2o/s2orc | v3-fos-license | Legal and Ethical Challenges in Developing a Dutch Nationwide Hepatitis C Retrieval Project (CELINE)
In 2016 the World Health Organization (WHO) called upon nations worldwide to eliminate viral hepatitis. Due to suboptimal hepatitis C virus (HCV) therapies in the past, many patients could not be treated or cured. With the current options, all patients can be treated and >90% is cured. However, these developments have not reached all patients, especially those who were lost to follow-up (LTFU) in previous years, an estimated 30% in the Netherlands. Retrieving these patients can contribute to HCV elimination. In light of this, we aimed to develop a nationwide retrieval strategy. During development we identified four major challenges. The first challenge is ethical and arises from the aim of the project: should physicians retrieve LTFU patients? We argue that the arguments in favour outweigh those against. The three other challenges are methodological and mainly legal in nature. Firstly, how far back are we allowed to trace LTFU patients? In the Netherlands, patient files should be kept for a minimum of fifteen years, but in chronic disease they may be archived longer. Secondly, which professional should identify the LTFU patients? Ideally this would be the treating physician, but we describe the circumstances that allow inclusion of assistance. Lastly, what is the proper way to invite the LTFU patients? We found that we can often request current address information from municipalities, and explain this process in detail. The offered solutions are feasible and translatable to other healthcare environments. We hope to take away any insecurities people may have about the ethical and legal nature of such a retrieval project and hope to inspire others to follow in our footsteps.
Introduction
Hepatitis C virus (HCV) infection is a cause of liver disease that becomes chronic in 70%-75% of cases. Infection may result in life-threatening complications such as cirrhosis, hepatocellular carcinoma, and death. With 71 million people affected worldwide, global annual HCV mortality has increased in the past 15 years. 1 In 2016, the World Health Organization (WHO) has set viral hepatitis elimination goals, which call for a 90% reduction in new chronic infections and a 65% reduction in mortality by 2030. Numerous countries, including the Netherlands, have agreed to comply with these goals. A Dutch national hepatitis plan was developed in 2016, focusing on five key areas of interest: (1) awareness and vaccination, (2) identification of infected patients, (3) diagnostics and treatment, (4) improved organization of hepatitis care and (5) surveillance of identified patients.
The third key area is of particular interest to us: diagnostics and treatment. Until 2014, the standard of care for chronic HCV patients was pegylated interferon with ribavirin, a lengthy, moderately effective and ill-tolerated treatment which cured only 40%-80% of patients. As a result, many patients had no treatment options or declined receiving treatment. In the Netherlands, it is estimated that ~30% of all diagnosed HCV patients have disappeared from care (lost to follow-up: LTFU). 2,3 The advent of direct acting antivirals (DAAs) in 2014 completely changed the therapeutic landscape of HCV. With an average treatment duration of 8-12 weeks, cure rates are >90%. The only remaining challenging group are patients with decompensated cirrhosis. 4 Unfortunately, these novel therapeutic developments have not reached many LTFU patients. They are still at risk for liver related complications and would benefit greatly from reassessment. This calls for a systematic search for the LTFU population: retrieval. 5 In the past decade, numerous regional HCV retrieval projects have been carried out in the Netherlands. CELINE ('Hepatitis C elimination in the Netherlands') is the first nationwide approach and was developed based on these regional projects. This paper outlines the CELINE retrieval strategy which comes with various ethical and legal challenges. We aim to offer solutions and provide a legal framework for clinicians and researchers interested in retrieval.
CELINE Methodology
As described in the Figure, CELINE consists of four phases. In the first phase, laboratory records and patient charts are reviewed to identify patients who are LTFU. HCV antibody tests, Western blots, RNA tests, and genotyping results are reviewed. We identify patients who are possibly chronically infected and patients who were chronically infected at the time of the last test. The first group consists of patients who have a positive anti-HCV test without a known RNA result. The second group consists of patients in whom the last RNA result was positive. Patient records from both groups will be reviewed to ascertain whether they are LTFU.
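The phase 1 triage over laboratory records can be sketched as a small pandas routine. The column names and tabular layout below are our assumptions for illustration, not the project's actual data model; the two flagged groups mirror the definitions in the text.

```python
import pandas as pd

def flag_possible_chronic(tests: pd.DataFrame) -> pd.DataFrame:
    """Flag patients whose charts need review, per the two groups described.

    `tests` holds one row per HCV test with hypothetical columns:
    patient_id, date, test_type in {'anti_hcv', 'rna'},
    result in {'positive', 'negative'}.
    """
    tests = tests.sort_values("date")
    # Most recent RNA result per patient (if any RNA test exists)
    last_rna = (tests[tests.test_type == "rna"]
                .groupby("patient_id").result.last())
    # Whether the patient ever had a positive antibody test
    ever_ab_pos = (tests[tests.test_type == "anti_hcv"]
                   .groupby("patient_id").result
                   .apply(lambda r: (r == "positive").any()))
    ids = sorted(set(last_rna.index) | set(ever_ab_pos.index))
    out = pd.DataFrame(index=ids)
    # Group 1: anti-HCV positive without any known RNA result
    out["ab_pos_no_rna"] = [bool(ever_ab_pos.get(i, False)) and i not in last_rna.index
                            for i in ids]
    # Group 2: last RNA result positive
    out["last_rna_pos"] = [last_rna.get(i) == "positive" for i in ids]
    out["review_chart"] = out.ab_pos_no_rna | out.last_rna_pos
    return out
```

Flagged patients would then undergo manual chart review to confirm LTFU status, since laboratory data alone cannot distinguish cured or transferred patients.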
The review of laboratory and patient records results in a cohort of LTFU patients. Patients residing in the Netherlands are invited via letter to be re-evaluated during phase 2. Patients who are 18 years or older are furthermore informed about the CELINE research project, which consists of patient file research. Patients are contacted by phone 1-2 weeks afterwards to ascertain if they want to be re-evaluated. When patients do wish to be re-linked to care, their general practitioner is contacted and asked for a referral letter.
Phase 3 consists of re-evaluating the LTFU patients. This re-evaluation is part of standard clinical care and serves no research purpose. If the patient wants to participate in the CELINE research project, consisting of patient file research, informed consent will be signed during this visit. Data on patient and disease characteristics and retrieval results of patients who have signed informed consent will be collected during phase 4. This data is pseudonymized and stored in a validated and Good Clinical Practice compliant web-based data management program, Castor EDC. Only the local physicians and/or researchers will have access to the local source file linking the codes to specific patients.
The primary outcome of the CELINE research project will be the total number of LTFU patients who have been successfully linked to care. Secondary outcomes include HCV prevalence, number of already successfully treated HCV patients, number of LTFU HCV-positive patients, reasons for LTFU and genotype prevalence, transmission route and liver fibrosis stage (progression) of the LTFU population.
We hypothesize that approximately 25% of ever-diagnosed patients are LTFU and that we will be able to link 25% of invited patients back to care. Estimating a diagnosed population of 16 000, this corresponds to 4000 LTFU patients, of whom 1000 will be re-linked to care.
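The projection above is simple arithmetic and can be checked directly; the variable names are ours, the figures come from the text:

```python
diagnosed = 16_000    # estimated ever-diagnosed HCV population (from the text)
ltfu_rate = 0.25      # hypothesized fraction lost to follow-up
relink_rate = 0.25    # hypothesized fraction of invited patients re-linked

ltfu = diagnosed * ltfu_rate       # expected LTFU patients: 4000
relinked = ltfu * relink_rate      # expected patients re-linked to care: 1000
```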
Legal and Ethical Issues Arising During Development of CELINE
CELINE aims to identify LTFU chronic HCV patients and link them to care. The fundamental question that arises may be defined as follows: 1. Should physicians retrieve LTFU patients? There are several arguments in favour of retrieval. Hepatitis C infection causes liver-related morbidity and mortality, and as a result an estimated 300 people die each year in the Netherlands. 6 DAA treatment is fully reimbursed by Dutch healthcare insurance and offering treatment to LTFU patients has many advantages. First, the burden of disease can be lifted from symptomatic patients. Second, treatment curtails the development of complications. If these patients had been retained in care, worsening of their disease would have led to therapy; since they became LTFU, we expect their disease to have progressed further compared to patients who were retained in care. Furthermore, we expect LTFU patients to have more additional risk factors for severe liver disease compared to the population retained in care, such as alcohol or drug use. Treating these patients prevents the development of (further) morbidity and mortality, as was shown in a study by Willemse et al. 7 Treating patients in an early stage of the disease may also offer an economic advantage. Successful treatment will prevent the development of both extra-hepatic and hepatic complications, which would otherwise require long-term and costly monitoring and/or treatment. 8 A systematic review of modelling studies performed in the DAA era concluded that scale-up of treatment was generally cost-effective compared to more restrictive treatment. 9 The fourth advantage of treating LTFU patients is population-based. HCV transmission, though largely limited to specific risk groups in the Netherlands, still occurs in the current day and age. Treatment as prevention is an effective strategy; for example, HIV transmission can be prevented by antiretroviral therapy. 10 In view of the availability of highly effective DAAs, treatment as prevention is a realistic prospect. This is mostly the case in risk groups with ongoing transmission, such as men who have sex with men (especially HIV-positive men and pre-exposure prophylaxis users) and injecting/intranasal drug users. In this case, retrieval serves to protect uninfected individuals. Finally, there is a moral argument: the new therapeutic options are universally effective, a paradigm shift compared with the situation when these patients left care. Many physicians will feel morally and ethically obliged to inform patients about the greatly improved outlook. Retrieval itself does not violate the patient's 'right not to know', since they have already received their HCV diagnosis in the past. However, there are also counter-arguments against retrieval. Firstly, since the patients are no longer in care, physicians are not contractually obliged to re-establish contact with patients who are not currently being seen in their practice. On the other hand, if the arguments in favour of retrieval outweigh the potential disadvantages, one could argue that there is no reason to refrain from participation. Thus, healthcare providers would be technically bound to retrieval, which would result in a situation where non-participating hospitals would be liable. If we choose to start retrieving LTFU patients suspected of chronic hepatitis C, we might want to extend retrieval projects to other disorders. However, it is difficult to identify robust criteria for disorders that merit retrieval, and we run the risk that the line distinguishing retrievable from non-retrievable disorders will easily become blurred. Chronic hepatitis C infection only causes complications after a protracted period of time in a limited subset of people. Thus, there is no clinical emergency requiring physicians to act immediately in order to avoid an acute and major health hazard.
Lastly, retrieval could be seen as a violation of the patient's autonomy, or a certain 'right to be left alone'. This autonomy could be overridden in view of the public health advantage that retrieval provides. However, it is also possible to curtail HCV transmission in other ways, such as education and improving awareness.
In their 2016 report, the Dutch Health Council advised the minister of Health, Welfare and Sport on screening and retrieval of hepatitis B and C. 11 The Council favoured retrieval of LTFU HCV patients and indicated that this strategy is an integral part of the (after) care for diagnosed HCV patients. The advice by the Council has been endorsed by the minister of Health.
In light of all these arguments, we think that physicians have a legal and moral right to retrieve their LTFU chronic HCV patients, though they are not legally or morally bound to do so. Even though the cost-saving argument does not outweigh the patient's autonomy in our opinion, we do not feel that retrieval threatens this autonomy: these patients have been diagnosed before and the information given during retrieval is noncommittal. The final argument in favour of a retrieval strategy is that CELINE has scientific aims. Within this scope, physician-researchers are allowed to retrieve LTFU patients, as long as there is a clear protocol that has been approved by the appropriate regulatory bodies.
In summary, we conclude that retrieval of chronic HCV patients is ethically and legally feasible when performed in a structured manner. We will now address the accompanying legal challenges.
Legal and Ethical Issues Arising During Development of CELINE Methodology
CELINE aims to identify LTFU patients by retrospectively reviewing medical records. One of the first questions that arises is: 1. How far back in time is CELINE allowed to look in order to identify LTFU patients? The Dutch Medical Treatment Contracts Act (WGBO) article 454 states that medical records should be kept for a minimum period of 15 years. 12 However, this period can be extended in the case of chronic conditions. CELINE therefore aims to review these records as far back as possible. In other countries, the period for which caregivers must store medical records may vary; we would nevertheless advise reviewing records as far back as possible.
In CELINE, we chose to identify patients based on laboratory records, since the Netherlands lacks a national registry in which all diagnosed patients are recorded. Using laboratory records ensures that no patients are missed, since the diagnosis is made based on blood test results. However, some countries might not be able to use laboratory records. Countries that have a national hepatitis registry might consider identifying their patients through this database instead. Otherwise, the use of diagnostic coding systems, such as the International Classification of Diseases, might provide a good alternative.
A medical microbiologist will produce a list of all HCV tests performed. Microbiologists are allowed to share this list with other physicians, since they are regarded as members of the treatment team according to article 457 of the WGBO, 12 a position endorsed by the Dutch Health Council. 11 Legislation may vary in other countries. Where microbiologists are not allowed to share their list of possible LTFU patients, they could theoretically retrieve these patients themselves. However, retrieval based on laboratory results alone will likely result in contacting many patients who are already cured or still in care.
The laboratory records will be reviewed in order to select patients in whom the last test result was positive, indicating that they were still infected when they left care. Hereafter, chart review is performed in order to ascertain LTFU status. This two-step selection process gives rise to the following challenge: 2. Who should perform selection of laboratory records and chart review in order to identify LTFU patients?
The information that has to be reviewed contains personal data, which is privileged information only divulged to members of the treatment team. Unfortunately, previous regional retrieval efforts showed that the reviewing of this data is a time-consuming process. It would be valuable if an external party could review the records and identify LTFU patients. However, this idea needs careful exploration.
The WGBO states that in order to access a patient's medical records, the patient has to give permission. 12 However, the WGBO provides an exception if obtaining permission is impossible. In the case of CELINE, hundreds of patients would have to be contacted by their treating physician to ask permission to review their medical files, which would require an extraordinary amount of time and effort. The main condition for not obtaining the patient's permission is stated in article 458 of the WGBO: the patient's privacy must not be disproportionately compromised. As a consequence, the external party has to be trained in medical confidentiality and patient privacy, and must sign a confidentiality agreement prior to record review. Only data pertaining to the LTFU status of the patient should be reviewed. Identifiable data should not be collected from patients who cannot be invited for re-evaluation (eg, deceased patients) or who have actively objected to the exchange of their medical records. Identifiable data of patients who object to re-evaluation should be removed. Finally, these privacy-protecting measures have to be reviewed by the institutional review board, which should give a final ruling and monitor the process.
In other countries, legislation may vary. If an external party were not allowed to review patient records without prior permission, there are several options. In any case, the laboratory records would first have to be reviewed by a microbiologist to identify patients who might be LTFU. Subsequently, there are two options. The first is for a member of the treatment team to review the records. The second is to ask patients for permission to review their records. This requires pre-emptive contact with the individual patients, which presents an opportunity to offer them re-linkage to care immediately, without reviewing their medical records first. However, it is likely that many patients who are contacted will already be cured or still in care.
After identification of the LTFU patients, the patient will be invited for re-evaluation. 3. What is the best way to invite patients? The safest way to reach LTFU patients is by written invitation, provided the sender is sure that the address is correct. In the case of LTFU patients, address information might not have been updated for years. Therefore, we obtain current address information from the Municipal Personal Records Database (Basisregistratie Personen, BRP) when possible. Every person residing in the Netherlands for at least four months is obliged to register in the BRP. Organizations of public or social importance can request authorization to obtain this information, as stated in article 3.2 of the Personal Records Database Act. 13 Hospitals, for instance, request authorization in view of optimal patient care or for medical research. In hospitals that do not have access to the BRP, CELINE collaborators should make a reasonable effort to ascertain the current address of their LTFU patients. This also applies to caregivers in other countries who cannot request address information from municipalities. We advise them to contact the patient's other healthcare providers to retrieve the patient's address; in CELINE, for example, we contact their general practitioner. If the patient is not in contact with any known healthcare provider, a test letter could be sent to the last known address, without mentioning hepatitis C. If caregivers cannot ascertain the current address of the patient, the invitation should not be sent.
Conclusion
Retrieval of LTFU patients should only be undertaken when the benefits outweigh the disadvantages, as is the case in chronic hepatitis C. CELINE is the first nationwide retrieval project in the Netherlands. We have identified four major challenges of an ethical and legal nature. We believe that these challenges translate to both HCV- and non-HCV-related retrieval projects. We have provided solutions that can be used in the Netherlands and other countries, showing that retrieval is feasible when performed carefully. We hope to allay any concerns about the ethical and legal aspects of retrieval projects in the Netherlands, and to inspire healthcare professionals and policy-makers in other countries to develop their own retrieval strategies, tailored to their healthcare and legal systems.
Baseline Tumor Size as Prognostic Index in Patients With Advanced Solid Tumors Receiving Experimental Targeted Agents
Abstract
Background: Baseline tumor size (BTS) has been associated with outcomes in patients with cancer treated with immunotherapy. However, the prognostic impact of BTS on patients receiving targeted therapies (TTs) remains undetermined.
Methods: We reviewed data of patients with advanced solid tumors consecutively treated within early-phase clinical trials at our institution from 01/2014 to 04/2021. Treatments were categorized as immunotherapy-based or TT-based (biomarker-matched or not). BTS was calculated as the sum of RECIST 1.1 baseline target lesions.
Results: A total of 444 patients were eligible; the median BTS was 69 mm (IQR 40-100). OS was significantly longer for patients with BTS lower versus higher than the median (16.6 vs. 8.2 months, P < .001), including among those receiving immunotherapy (12 vs. 7.5 months, P = .005). Among patients receiving TT, lower BTS was associated with longer PFS (4.7 vs. 3.1 months, P = .002) and OS (20.5 vs. 9.9 months, P < .001) as compared to high BTS. However, such association was only significant among patients receiving biomarker-matched TT, with longer PFS (6.2 vs. 3.3 months, P < .001) and OS (21.2 vs. 6.7 months, P < .001) in the low-BTS subgroup, despite a similar ORR (28% vs. 22%, P = .57). BTS was not prognostic among patients receiving unmatched TT, with similar PFS (3.7 vs. 4.4 months, P = .30), OS (19.3 vs. 11.8 months, P = .20), and ORR (33% vs. 28%, P = .78) in the 2 BTS groups. Multivariate analysis confirmed that BTS was independently associated with PFS (P = .03) and OS (P < .001) but not with ORR (P = .11).
Conclusions: Higher BTS is associated with worse survival outcomes among patients receiving biomarker-matched, but not biomarker-unmatched TT.
Implications for Practice
Baseline tumor size (BTS) has been used as a surrogate marker of tumor burden, and its prognostic value in patients with advanced solid tumors receiving immunotherapy is well established. Fewer data are available regarding its role in patients treated with targeted therapies (TTs). In this retrospective study, we found a significant association between BTS and outcomes among patients with advanced solid tumors receiving experimental TTs, but only when these agents were matched to a specific molecular biomarker. If validated, BTS could represent an accessible and promising biomarker for risk-adapted treatment decision-making in clinical practice. In addition, it could be a useful stratification factor in clinical trials testing novel anticancer drugs.
Introduction
The extension of solid tumors at diagnosis, namely disease stage, has traditionally driven the choice of treatment (surgery, radiotherapy, and systemic therapy) of non-metastatic disease.[2][3] However, no such subdivision exists for tumors once they have spread to distant sites. With few exceptions, the intent of systemic therapy for metastatic solid tumors is palliative and is not based on the burden of disease.
In recent years, several studies have shown the relevant prognostic impact of baseline disease burden in patients with metastatic cancer. Most of the available evidence emerged with the use of immune-checkpoint inhibitors (ICIs) for the treatment of patients with advanced melanoma, 4 non-small-cell lung cancer (NSCLC), 5 and head and neck cancer. 6 For all these indications, ICIs showed more favorable treatment outcomes in patients with lower baseline disease burden, assessed either through computed tomography (CT) or through positron emission tomography (PET) scans. Additionally, our group has confirmed the prognostic role of CT-based baseline tumor burden among patients treated with next-generation immunotherapy agents within early-phase clinical trials, potentially highlighting the broad validity of this association among different tumor types. 7 The prognostic role of baseline tumor burden among cancer patients treated with other treatment modalities remains instead undefined. Targeted therapies (TTs) are emerging as a highly effective treatment for multiple tumor types, with some showing efficacy even independently from the histological background. 8 Efforts are required to elucidate whether the prognostic value of tumor burden is specific to immunotherapy, or if it also applies to TT.
The main aim of the present retrospective study was to evaluate whether the baseline burden of disease measured by CT scan correlates with outcomes in patients with advanced solid tumors receiving experimental TT as part of early-phase clinical trials. Moreover, we aimed to validate, in a larger cohort, our previous finding of an association between baseline tumor burden and outcome in cancer patients treated with novel immunotherapies.
Study Population
We report a single-institution retrospective observational study. We identified all consecutive patients treated within early-phase clinical trials at the New Drugs and Early Drug Development for Innovative Therapies Division of the European Institute of Oncology (Milan, Italy), from January 2014 until April 2021. Data on baseline characteristics, type of therapy, response to treatment, and survival outcomes were collected from patient medical records. The study protocol was approved by the institutional review board and local ethics committee (approval number UID 3560) and was conducted in accordance with the Declaration of Helsinki.
Study Treatments
We included all patients with advanced solid tumors receiving at least one dose of experimental medication within an early-phase trial of immunotherapy or targeted agents. A detailed list of all experimental treatment targets included, and their categorization, is reported in Supplementary Table S1. Treatments were categorized as immunotherapy-based if any immune-oncology agent was included in the regimen, or TT-based if they included a targeted agent, with or without chemotherapy. Thus, regimens including both immunotherapy and TT were considered immunotherapy-based. In this study, endocrine therapy-based treatments were included among TT. TTs were further divided into biomarker-matched if administered to patients based on the identification of a specific molecular biomarker, or biomarker-unmatched if not requiring any molecular feature.
Imaging Assessments
Baseline imaging assessments, including CT scans of the chest, abdomen, and pelvis, were performed within 28 days before treatment initiation, as per study protocol. Consistent with prior studies, baseline tumor size (BTS) was used as a metric of the baseline burden of cancer. BTS at the time of treatment initiation was calculated according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1, 9 ie, a maximum of 5 target lesions and a maximum of 2 per organ. All image assessments were performed by radiologists from the European Institute of Oncology affiliated with the phase I facility. Patients could only be included in the study if they had at least 1 RECIST-measurable lesion at baseline. Patients were divided into 2 subgroups according to the median BTS value: greater than the median (high BTS group) or lower than or equal to the median (low BTS group).
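The BTS definition above (sum of the longest diameters of up to 5 target lesions, at most 2 per organ, followed by a median split) can be sketched as follows. This is an illustrative simplification rather than the study's actual pipeline: RECIST 1.1 target lesions are chosen by radiologists for size and measurement reproducibility, whereas this sketch simply takes the largest lesions first, and the function and variable names are our own.

```python
import statistics

def baseline_tumor_size(lesions, max_total=5, max_per_organ=2):
    """Sum of longest diameters (mm) of RECIST 1.1-style target lesions:
    at most `max_total` lesions overall and `max_per_organ` per organ.
    `lesions` is a list of (organ, diameter_mm) pairs; as a simplification,
    the largest lesions are selected first."""
    per_organ = {}
    chosen = []
    for organ, diameter_mm in sorted(lesions, key=lambda x: -x[1]):
        if len(chosen) == max_total:
            break
        if per_organ.get(organ, 0) < max_per_organ:
            per_organ[organ] = per_organ.get(organ, 0) + 1
            chosen.append(diameter_mm)
    return sum(chosen)

def bts_group(cohort_bts, patient_bts):
    """Median split: 'low' if at or below the cohort median, 'high' otherwise."""
    return "low" if patient_bts <= statistics.median(cohort_bts) else "high"
```

For example, a patient with three liver lesions of 30, 25, and 20 mm and one lung lesion of 15 mm would have a BTS of 70 mm, since the third liver lesion is excluded by the 2-per-organ rule.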
Statistical Analysis
Descriptive statistics were used to present patient and tumor characteristics. Data were presented as relative frequencies (percentages) or as median and interquartile range (IQR) for continuous variables. BTS was analyzed as a categorical variable, considering the median and quartiles of the distribution. We investigated differences in terms of tumor objective response rate (ORR) and clinical benefit rate (CBR) at 6 months using Mantel-Haenszel chi-square tests. Progression-free survival (PFS) was calculated from the first treatment cycle to disease progression or death (event), or last follow-up (censored). Overall survival (OS) was calculated from the first treatment cycle to death (event) or last follow-up (censored). PFS and OS curves were estimated with the Kaplan-Meier method, and survival distributions were compared using the log-rank test. Factors found to be associated with PFS and OS in the univariate analyses were considered for the multivariate models. Multivariate Cox proportional hazards models were used to investigate the independent prognostic role of BTS, adjusting for other significant prognostic factors and confounders. Results are presented as hazard ratios (HRs) with 95% CIs. For all analyses, 2-tailed P < .05 was considered statistically significant. The statistical analyses were performed with R software, version 4.1.1.
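The two survival tools named above, the Kaplan-Meier estimator and the log-rank test, can be written from first principles. The study itself used R 4.1.1; the following standard-library Python sketch is only illustrative (function names and example data are ours). It computes a survival curve and a two-group log-rank chi-square statistic from (time, event) pairs, where event = 1 marks progression/death and 0 marks censoring.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.
    times: follow-up times; events: 1 = event, 0 = censored.
    Returns a list of (time, survival probability) at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0  # events / censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= d + c
    return curve

def logrank_statistic(times_a, events_a, times_b, events_b):
    """Two-group log-rank chi-square statistic (1 degree of freedom).
    At each event time, compares observed events in group A with the
    number expected under the null hypothesis of identical survival."""
    pooled = [(t, e, 0) for t, e in zip(times_a, events_a)]
    pooled += [(t, e, 1) for t, e in zip(times_b, events_b)]
    pooled.sort()
    n = [len(times_a), len(times_b)]  # numbers at risk per group
    o_minus_e = 0.0
    var = 0.0
    i = 0
    while i < len(pooled):
        t = pooled[i][0]
        d = [0, 0]  # events per group at time t
        c = [0, 0]  # censorings per group at time t
        while i < len(pooled) and pooled[i][0] == t:
            _, e, g = pooled[i]
            if e:
                d[g] += 1
            else:
                c[g] += 1
            i += 1
        d_tot, n_tot = d[0] + d[1], n[0] + n[1]
        if d_tot and n_tot > 1:
            o_minus_e += d[0] - d_tot * n[0] / n_tot
            var += d_tot * (n[0] / n_tot) * (n[1] / n_tot) * (n_tot - d_tot) / (n_tot - 1)
        n[0] -= d[0] + c[0]
        n[1] -= d[1] + c[1]
    return o_minus_e ** 2 / var if var else 0.0
```

The resulting statistic can be referred to a chi-square distribution with 1 degree of freedom to obtain the log-rank P value.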
Patient Characteristics
Four hundred and forty-four patients were eligible and included in the analysis. The baseline clinical and pathological characteristics of the study population are reported in Table 1. The median age at the time of enrolment was 56 years (48-65 years), 328 (73.9%) patients were female, and the majority had a baseline Eastern Cooperative Oncology Group (ECOG) performance status (PS) of 0 (63%). The most represented tumor types were breast (49%), lung (9%), melanoma (5%), gastric, colorectal, head/neck, and ovarian (4% each) carcinomas (Table 1; Supplementary Table S2). The median number of prior treatment lines for advanced disease was 2 (range: 1-3). Two hundred and twenty patients received an immunotherapy-based regimen (49%), 198 received a TT-based regimen (44%), and 26 (6%) received an antibody-drug conjugate. TT-based regimens were biomarker-matched in 63% of patients treated with TT, with no significant difference between the two BTS subgroups. Median BTS was 69 mm (IQR 40-100); BTS by density plot and histogram is shown in Supplementary Fig. S1. A higher median BTS was observed in patients with ECOG PS 1 compared to ECOG PS 0 (P = .008). Albumin and lactate dehydrogenase (LDH) values at treatment initiation were available for 54.3% and 77% of the study population, respectively. According to the LDH, albumin, and number of metastatic sites variables, patients were classified into a good prognosis group [Royal Marsden Hospital (RMH) prognostic score 0-1; n = 191] or a poor prognosis group (RMH score 2-3; n = 80). Higher BTS was significantly associated with lower albumin levels (P = .0003), higher LDH levels (P < .0001), higher neutrophil-to-lymphocyte ratio (NLR) (P = .0004), and poorer RMH prognostic score (P < .001) (Table 1). These factors were significantly associated with OS and PFS (Supplementary Table S3).
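The RMH classification above combines albumin, LDH, and the number of metastatic sites. A minimal sketch, assuming the commonly cited definition of the score (one point each for albumin below 35 g/L, LDH above the upper limit of normal, and more than 2 sites of metastasis; 0-1 good prognosis, 2-3 poor); the thresholds and names here are illustrative and not taken from the study's own code:

```python
def rmh_prognostic_score(albumin_g_per_l, ldh, ldh_upper_limit, n_metastatic_sites):
    """Royal Marsden Hospital (RMH) prognostic score, assuming the commonly
    cited definition: one point each for low albumin, elevated LDH, and
    more than 2 metastatic sites. Returns an integer 0-3."""
    score = 0
    if albumin_g_per_l < 35:      # hypoalbuminemia
        score += 1
    if ldh > ldh_upper_limit:     # LDH above upper limit of normal
        score += 1
    if n_metastatic_sites > 2:    # widespread disease
        score += 1
    return score

def rmh_group(score):
    """Dichotomization used in the text: 0-1 good prognosis, 2-3 poor."""
    return "good" if score <= 1 else "poor"
```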
Impact of BTS on Outcomes in the Overall Population
The median follow-up was 11.7 months (range: 4.1-22.6 months). In the overall population, median OS (mOS) was 11.8 months, significantly longer for patients with low versus high BTS (16.6 vs. 8.2 months, P < .001). The 24-month OS was 40% for patients with low BTS compared to 18% for those with BTS > 69 mm (log-rank P < .0001). Similarly, PFS was significantly longer for patients with low versus high BTS (3.6 vs. 2 months, P = .0004). BTS was not significantly associated with disease progression at 6 weeks from treatment initiation (17.1% and 24.0% for low and high BTS, respectively; P = .10). Figure 1 shows the Kaplan-Meier survival curves of PFS and OS in the overall population according to BTS. The association of BTS with OS remained statistically significant when considering only those patients receiving immunotherapy (P = .005) or those receiving targeted therapy (P < .001; Fig. 2). Even when considering BTS quartiles rather than the median, in the overall population we found an inverse association between mOS and BTS: 15.9 months, 16.6 months, 8.4 months, and 8.1 months with increasing quartiles (P < .001). A similar association was observed for mPFS: 3.9 months, 3.6 months, 2.2 months, and 1.9 months with increasing quartiles of BTS (P = .003). The association of BTS quartiles with OS and PFS remained statistically significant when evaluating patients receiving immunotherapy and targeted therapy as separate groups (Supplementary Fig. S2).
Among eligible patients, 415 were evaluable for response. The ORR was numerically higher among patients with low BTS (25% vs. 17%); however, this difference was not statistically significant (P = .06). The CBR was significantly higher in the BTS-low subgroup (58%) compared to patients with high BTS (39%) (P = .0001; Supplementary Table S4). The same associations with ORR (P = .06) and CBR (P < .001) were observed when considering BTS quartiles. In the overall population, factors associated with outcomes in the univariate model were tested in the multivariate analysis, which confirmed that high BTS was independently associated with shorter OS (HR: 1.77, P < .001) and shorter PFS (HR: 1.27, P = .03), but not with ORR (P = .11; Table 2). When the RMH score was included in the model, the association of BTS with OS remained significant (P = .02), whereas the significant association with PFS was lost (Supplementary Table S5).
Impact of BTS on Outcomes in Patients Receiving TT
Among patients receiving experimental TT, mOS was 16.5 months (11.37-20.02 months), and those with BTS > 69 mm had significantly shorter OS compared to patients with lower BTS (mOS: 20.5 vs. 9.9 months, P < .001; Fig. 2A). Similarly, median PFS was significantly longer in the BTS-low subgroup than in patients with higher BTS (4.7 vs. 3.1 months, P = .002; Supplementary Fig. S3). However, when patients were divided into 2 subgroups based on the type of targeted therapy (biomarker-matched and biomarker-unmatched), BTS was only found to be prognostic among patients receiving biomarker-matched TT. In this subgroup, mOS was 16.5 months and was significantly longer for patients with BTS ≤ 69 mm (21.2 months) compared to those with higher BTS (6.7 months, P < .001). Similarly, among these patients, BTS was significantly associated with PFS, with a mPFS of 6.2 versus 3.3 months in the BTS-low versus BTS-high subgroups, respectively (P = .009). On the contrary, among patients receiving unmatched TT, outcomes did not significantly differ between the 2 BTS subgroups, with similar mOS (19.3 vs. 11.8 months, P = .20) and mPFS (3.7 vs. 4.4 months, P = .30). Fig. 3 shows the Kaplan-Meier survival curves of OS and PFS by BTS in patients receiving biomarker-matched or unmatched TT. Among patients receiving TT, significant differences among survival curves were also observed by quartile of BTS for both OS (P = .006) and PFS (P = .002; Supplementary Fig. S2A); however, this association was similarly significant only for patients treated with biomarker-matched TT (Supplementary Table S6).
When the response rate was evaluated according to BTS in patients receiving TT-based regimens, neither ORR nor CBR was significantly associated with BTS, regardless of biomarker matching (ORR for biomarker-matched, low vs. high BTS: 28% vs. 22%, P = .57; ORR for biomarker-unmatched, low vs. high BTS: 33% vs. 28%, P = .78; Table 3).
A subgroup analysis restricted to patients with metastatic breast cancer treated with TT (N = 141) showed a shorter OS in the BTS-high population as compared to the BTS-low population (12.6 vs. 19.3 months, P = .07) and a significantly shorter PFS (2.8 vs. 3.9 months, P = .02). Of note, when considering the type of TT received, the association of BTS with PFS remained significant only among patients receiving biomarker-matched TT (4.7 vs. 2.5 months, P < .001) (Supplementary Table S7). As in the overall population, in patients with breast cancer, ORR and CBR were not significantly associated with BTS, regardless of the type of TT.
Discussion
Our results provide preliminary insights into the prognostic value of CT-assessed baseline disease burden among patients with solid tumors treated with experimental TT. We found a statistically significant association between BTS and treatment outcomes among patients receiving TT in early-phase clinical trials. Of note, a different association between outcomes and BTS was observed when TTs were further divided into biomarker-matched or unmatched based on the selection of patients upon the identification of molecular biomarkers: only among patients receiving biomarker-matched TT was BTS significantly associated with OS and PFS, with longer survival observed among those in the BTS-low subgroup. BTS was not found to be significantly associated with response rate among patients receiving TT, regardless of biomarker selection. Moreover, when considering the whole population enrolled in early-phase clinical trials at our institution, regardless of treatment received, we found that BTS was an independent predictor of OS and PFS. This study also supported the significant association between BTS and treatment outcome in patients with solid tumors treated with next-generation immuno-oncology agents, in an expanded population compared to our prior study. 7 In recent years, the development and clinical implementation of multiple targeted drugs have changed the treatment scenario for multiple solid tumors, leading to a dramatic improvement in patients' prognosis. 10 However, clinicians still lack the ability to fully predict which patients will most likely achieve this long-term benefit. In advanced cancers, multiple clinical or biological factors have been associated with prognosis and are used to guide oncological treatment, such as measures of PS [eg, Karnofsky (KPS) or ECOG], 11 certain types of circulating white blood cells and their respective ratios, 12 or serum biomarkers, such as LDH.
13 The RMH prognostic score predicts survival in patients enrolled in early clinical trials and includes albumin, LDH, and number of metastatic sites. 14 Regarding the latter, tumor burden has been suggested as a useful prognostic factor in patients with metastatic disease; however, there is a relative lack of data on both the definition of tumor burden and its impact on patients' outcomes with different therapies. Although available evidence clearly demonstrates that tumor burden provides relevant prognostic information for patients treated with ICIs, 15 little data are available regarding its impact on patients who underwent treatment with targeted therapies.[17][18] To our knowledge, ours is the first study aimed at evaluating the prognostic value of the baseline burden of disease among patients with different solid tumors treated with experimental TT. We demonstrated an association between BTS and outcome only among patients receiving biomarker-matched TT. These findings, apparently in contrast to those available in renal carcinoma, require further validation, as this type of cancer is greatly underrepresented in our population. Moreover, the importance of the mechanistic target of rapamycin (mTOR) and angiogenesis in the biology of renal cell carcinoma may be associated with the different outcomes with unmatched TT. 19,20 One of the most interesting findings of our analysis was that the PFS and OS improvement observed in the low-BTS subgroup was not associated with an increase in ORR when compared to patients with BTS over the median. An explanation for this phenomenon could be found in intratumor heterogeneity 21 : despite the similar activity of TT in terms of response rate, higher tumor volume could be associated with higher heterogeneity that may promote tumor adaptation and treatment failure through the selection of preexisting drug-resistant clones.
22 For those patients with a high burden of disease who can tolerate an escalation of treatment, a more intensive approach, such as combination treatment, could be considered to overcome the occurrence of resistance. Another potential implication of our findings concerns the conduct of early-phase clinical trials testing TT, as BTS could be useful to improve patient selection. The enrolled population may be enriched not only based on molecular biomarker selection but also on imaging biomarkers such as BTS. Finally, the association between baseline tumor size and outcomes also raises the question of whether locoregional treatments aimed at reducing tumor burden could increase the benefit of systemic therapy and thus improve survival. So far, conflicting evidence has accumulated on the role of metastasis-directed treatment in addition to systemic therapy in different tumor types. In oligometastatic disease, while a role of local ablative therapy approaches has been established in patients with metastatic colorectal cancer, 23 no impact on survival has been demonstrated in patients with metastatic breast cancer. 24 In our study, the baseline burden of disease was assessed through CT scan and BTS was defined as per RECIST 1.1 criteria.
9 However, not all metastatic lesions are suitable for CT-based measurement, and these are recognized in the RECIST guidelines as "non-target" lesions, including bone lesions without identifiable soft-tissue components, metastatic effusions, and lesions < 10 mm in diameter. Moreover, the RECIST-based definition of tumor burden fails to differentiate the presence of multiple small metastases from a single large metastatic lesion, 2 clinical settings in which tumor biology is probably different. In addition, RECIST assessment does not consider the site of metastasis, which could strongly affect the prognosis based on the organ involved as well as be associated with different responses to treatment. Finally, the selection of the 5 target lesions may also be subjective, reflecting not only their size but also how well the lesions are delineated on CT scan, allowing reproducible repeated measurement. Thus, doubts may arise regarding CT-based assessment of tumor burden, since it may not be sufficient to dissect the complexity of metastatic disease. Other methods have been proposed to improve the definition of tumor burden, such as 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG)-PET and liquid biopsy. Among the parameters obtained by FDG-PET/CT, tumor burden can be defined through the total metabolic tumor volume (tMTV), the sum of all FDG-avid lesions. 25 This might represent a better marker of tumor burden than BTS, allowing a whole-body examination and the inclusion of lesions normally excluded from CT-based BTS analysis, such as bone lesions. A recently published study demonstrated a significant association between tMTV and OS among patients with NSCLC receiving pembrolizumab, while no difference was found in the group of patients treated with epidermal growth factor receptor (EGFR) inhibitors for EGFR-mutated NSCLC (n = 40).
26 Further studies are required to fully elucidate the role of PET scan in the definition of tumor burden and its impact on patients treated with TT. However, a systematic evaluation of the correlation between tMTV and prognosis is limited by the less consistent use of PET scan in clinical practice and the absence of PET assessments in the majority of clinical trials. Another attractive research area for the definition of tumor burden is liquid biopsy. Evidence supporting a role of circulating tumor cells (CTCs) [27][28][29] and circulating tumor DNA (ctDNA) 30 as surrogate markers for tumor burden is currently growing. However, especially for CTCs, difficulties in the isolation process and the requirement for specific expertise have limited their incorporation into clinical practice. Therefore, also considering the limitations of CT-based assessment, measuring the size of tumors with radiological imaging remains the simplest way to estimate tumor burden and possibly predict outcomes in routine clinical practice at most cancer centers worldwide. Indeed, contrast-enhanced CT imaging is currently performed for most patients with advanced cancer and may be used not only in tumor diagnosis and staging but also as a prognostic tool.
A strength of our study is that, by evaluating patients included in clinical trials, there is homogeneity regarding the timing of the CT scan relative to treatment initiation, given the typical time window of up to 4 weeks allowed in clinical trials. In addition, all imaging assessments were performed by radiologists from the European Institute of Oncology affiliated with the phase I facility. There are some relevant limitations of this work to be pointed out. First, the retrospective and single-institution nature of our analysis may represent a source of bias. Second, the population treated with TT is relatively small, especially in the biomarker-unmatched subgroup. Moreover, the cutoff of 69 mm for defining patients with high and low BTS was not pre-specified but was adjusted according to the median in our population. Finally, the population included in this analysis is highly heterogeneous in terms of tumor type, with a predominance of patients with breast cancer, as well as in terms of treatment received. All these limitations make the results of this work hypothesis-generating rather than conclusive, requiring validation in an independent cohort.
Several aspects should be assessed and clarified by future research. We used the median BTS value of our population to distinguish between high and low baseline tumor burden; however, considering the continuous relationship between BTS and risk of death, further studies should be designed to establish a universal definition of high tumor burden with a validated cutoff for BTS, which might differ across tumor types. Furthermore, future studies should address the question of whether the value of BTS differs among patients with the same tumor burden but a different distribution of metastases, such as specific organ involvement or the presence of a single large metastasis versus multiple smaller metastases. Finally, if the role of BTS is confirmed, future research should investigate the possibility of using BTS to escalate or de-escalate treatment in patients with advanced solid tumors receiving TT.
Conclusion
In summary, in this retrospective study, we found a significant association between BTS and outcomes among patients with advanced solid tumors receiving experimental TT in early-phase clinical trials only when treatment was based on target selection. Although independent validation of this finding is necessary, we hypothesize that BTS could represent an accessible and promising independent prognostic factor in clinical practice to select those patients with poorer prognosis who could benefit from treatment intensification. Moreover, BTS could be taken into account, among other baseline factors, to stratify patients in future clinical trials involving TT.
Figure 1 .
Figure 1.Kaplan-Meier analysis of progression-free survival (N = 417) and overall survival (N = 416) by baseline tumor size (BTS) in the overall population.
Figure 3 .
Figure 3. Kaplan-Meier analysis of overall survival and progression-free survival according to baseline tumor size (BTS) in patients receiving biomarkermatched (A, N = 112 and B, N = 113) or unmatched (C, N = 71 and D, N = 71) targeted therapy (TT).
Table 2 .
Multivariate Cox regression model for overall survival, progression-free survival, objective response rate, and clinical benefit rate. Abbreviations: IO, immunotherapy; NLR, neutrophil/lymphocyte ratio; PS, performance status; TT, targeted therapy.
Table 3 .
Objective response rate (ORR) and clinical benefit rate (CBR) by median baseline tumor sum (BTS) in patients receiving TT
"year": 2023,
"sha1": "e1e603c9fbf5fcfb0b97371a4a43ad237d822755",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/oncolo/advance-article-pdf/doi/10.1093/oncolo/oyad212/51047446/oyad212.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3cd68338212b9386a75e6dc16af2c02a8c0db14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Barriers to Trade in Financial and Insurance Services: Evidence from the United Kingdom
Distance, as a proxy for trade barriers, is found in many studies to matter even for weightless cross-border financial investments and lending, possibly due to the presence of information asymmetries. Its importance is tested in this paper using exports of all five broad categories of the U.K.'s financial and insurance services. No trade barriers are found for the bulk of the U.K.'s exports. Trade barriers are confirmed only for interest-bearing activities, in line with available results in the literature. The positive effect of EU membership appears to be small. Notwithstanding the uncertainties, this suggests that post-Brexit disruptions of the U.K.'s export of financial and insurance services may be minor. IMF Working Papers describe research in progress by the author and are published to elicit comments and to encourage debate. The views expressed in IMF Working Papers are those of the author and do not necessarily represent the views of the IMF, its Executive Board, or IMF management.
I. INTRODUCTION
This paper studies the role of trade barriers in the export of a range of financial and insurance services. It applies a gravity model, where distance is a proxy for trade barriers, to the case of the U.K. The U.K. is the largest net exporter of financial and insurance services in the world and a global leader in cross-border provision of these services, exporting to more than 64 countries across all continents. The U.K.'s financial and insurance services account for close to 30 percent of the U.K.'s export of all services.
Previous studies analyzed international transactions in financial assets, such as equity flows, securities holdings, and retail bank lending, using a gravity equation and found distance, representing barriers to trade, to be an important factor. The seminal paper by Portes and Rey (2005) found that a gravity model explains international equity flows at least as well as trade in goods. They called it a "distance puzzle", since financial services are weightless (they do not involve a physical delivery) and suggested that transaction costs may stem from greater information asymmetries for more distant countries. Transaction costs in financial services may indirectly reflect those in trade in goods if trade in financial services is strongly associated with trade in goods. Aviat and Coeurdacier (2007) explored the complementarity between bilateral trade in goods and bilateral asset holdings in a simultaneous gravity equations framework. Their results suggest that bilateral equity investments are indeed strongly correlated with the underlying patterns of trade in goods. Lane and Milesi-Ferretti (2008) studied the endogeneity between trade in goods and the holding of securities and found transaction costs (distance) to be the common underlying determinant. They explained it by barriers to international trade and information asymmetries. And finally, Heuchemer et al. (2009) analyzed cross-border retail bank lending in a gravity framework for the Eurozone and found a significant role of the physical distance. They partly attributed it to regulatory and cultural differences across countries.
All the cited studies above focused on international transactions in financial assets rather than export of financial and insurance services. However, international trade in financial services encompasses much more than interest revenues from cross-border investments and lending (stemming from financial asset holdings). Cross-border financial services are dominated by OTC derivatives (interest and currency) and spot currency trading. According to the Bank for International Settlements (2020, 2021), Schrimpf and Sushko (2019) and the World Bank (2021), in 2019-20, the total outstanding global equity and cross-border loans together amounted to $135 trillion, while the notional amount of global OTC derivatives reached $607 trillion and spot currency trading was close to $480 trillion a year. Such proportions are also found in the breakdown of export in the case of the U.K.: revenues from interest (financial intermediation services indirectly measured-FISIM) represent only 11 percent of total financial and insurance revenues (see Figure 1). Financial services explicitly charged (commissions and fees) represent the largest part, followed by direct insurance. The representativeness of the U.K. for the global trade and the advanced collection of bilateral trade data makes the U.K. a great case to learn from.
The main contribution of this paper consists of broadening the analysis beyond the usual interest-bearing asset holdings by analyzing the export of all types of traded financial and insurance services. Besides loans, deposits, and securities (measured by FISIM), the analysis in this paper includes trading in exchanges and derivatives (commissions and fees), insurance and reinsurance (premiums), and auxiliary insurance services.
The gravity equation explains the U.K.'s export of five types of financial and insurance services by commonly considered variables, including the market size, distance, and a range of political and cultural similarities. The U.K.'s export values are reported for a large number of countries, with a remainder for each subregion. To maximally utilize the information contained in the collected data, the gravity equation is estimated using a censored normal regression model, accounting for censoring of observations at different thresholds. Lane and Milesi-Ferretti (2008) also applied a Tobit-type model to cross-border equity holdings, where a large number of observations were censored at zero. A robustness analysis considers how results change when reducing the sample only to countries that are explicitly identified in the U.K.'s export statistics and estimating through least squares.
The main finding is that trade barriers are not significant for most types of U.K.'s financial and insurance exports. Transaction costs seem to matter only for interest-bearing activities (FISIM), which confirms previous findings in the literature that transaction costs due to asymmetric information (distance) are significant for cross-border bank lending and equity investments. However, trade barriers (distance) are not significant for the largest part of the U.K.'s financial services export, that is, the commissions-and-fees-based (explicitly charged) financial services. This may not necessarily come as a surprise since most of the financial services that involve commissions and fees (trading currencies, derivatives, stocks, and issuance and trading of bonds) are performed in the U.K. (the City) on behalf of clients in other countries. Consequently, there are no significant barriers to trade involved as clients from different countries meet at the London market.
Trade barriers are found to be also insignificant for all three types of exported insurance. Perhaps, the usually very detailed nature of contracts for direct insurance (life and non-life), that guide the processes of settling claims (auxiliary insurance) leaves little room for crossborder asymmetric information or hidden transaction costs for the U.K.'s insurance exports. In addition, the re-insurance market is, by definition, global, driven by diversification risks and hence there one may expect little cross-border transaction costs.
There are three other findings worth mentioning. First, the benefit of the U.K.'s passporting rights to the EU (the ability to serve EU clients from the U.K.-based firms without further authorization by other EU member countries) turns out to be significant, albeit small. It represents an additional boost in the range of one to two percent of the U.K.'s export of financial services to the EU but no effect for all types of insurance export. Second, a country's use of English as an official language boosts U.K.'s export of all financial and insurance services to that country. And third, countries' scores in the rule of law measure appear to be positively correlated with the U.K.'s export of financial services. However, the results are statistically insignificant for the U.K.'s export of insurance services, which is likely due to the very detailed, standardized character of typical insurance contracts, making them independent from the quality of local laws.
Section II details the methodology, including the specification of the gravity equation and estimation technique, while Section III describes the data used in the estimation and some stylized facts. Sections IV and V contain results and Section VI concludes.
II. METHODOLOGY
The analysis of export of financial and insurance services uses the standard gravity equation specification, as is common in the literature. The gravity equation (market size and distance) was employed in the studies of determinants of bilateral trade (Tinbergen, 1962, and Pöyhönen, 1963) even before it received theoretical foundations by Anderson (1979). The bilateral trade flows are typically explained by size, distance, and some measures for relative similarity of countries' size and development, sharing a common border, and other cultural (e.g., common language) and political similarities (see Baltagi et al., 2003). The actual choices of variables representing the size and similarity usually vary depending on what kind of cross-border flows are being investigated.
Therefore, the specification of the gravity equation for the U.K.'s export of financial and insurance services includes distance, market size, and variables representing cultural and political similarities:

log(1 + EX_i) = β_0 + β_1 log(POP_i) + β_2 log(GDPPC_i) + β_3 log(DIST_i) + β_4 EOL_i + β_5 ROL_i + β_6 EU_i + β_7 AA_i + β_8 EPA_i + ε_i,

where the dependent variable EX_i denotes the export of financial or insurance services from the U.K. to the trading partner i. Log(1+EX_i) is computed in order not to eliminate occasional observations with zero export. This transformation affects only the size of the intercept.
The explanatory variables are as follows:

• The size of the market is measured by population (POP_i) and income per capita at PPP (GDPPC_i) in the trading partner i (used by Kimura and Lee, 2006). The expectation is that a larger market attracts more trade.
• The trading costs (general barriers to trade and information asymmetries, including hidden costs) are proxied by the distance (DIST_i), which is measured in kilometers from London to the capital cities of the U.K.'s trading partners (as in Heuchemer et al., 2009). The common finding is that a larger distance entails costs that reduce trade.
• Cultural similarities are proxied by the use of English as a common official language in trading partner countries. The EOL_i is a dummy variable that equals one if trading partner country i uses English as an official language and zero otherwise. The choice of English as an official language is advantageous since it encompasses a combination of factors, namely the ease of communication between nations that share a common language and common law (former British colonies typically continue to use common law and English as an official language), which increases their bilateral trade, see Rauch (1999).
• Political similarities are represented by the rule of law score (ROL_i). Anderson and Marcouiller (1999) showed that hidden transaction costs in the form of contract enforcement reduce trade. The ROL_i is a governance indicator compiled by the World Bank, and a weaker score would be expected to reduce trade due to lower institutional quality and confidence (used by Heuchemer et al., 2009).

• Further to the political similarities, various agreed partnerships may benefit the U.K.'s export of financial and insurance services. Similar to Aviat and Coeurdacier (2007), I consider the role of major trade partnerships, namely, the EU_i membership, AA_i, the association agreement, and EPA_i, the economic partnership. Partnership dummies take the value 1 if the trading partner is in that particular partnership and zero otherwise. In the case of the EU, it is expected to primarily measure the effects of passporting rights, which make provision of cross-border financial services easier within the EU. The effects from AA and EPA are likely less direct, intermediated through enhanced cooperation through cross-border trade and finance.
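As a sketch of how this specification can be taken to data, the snippet below generates synthetic gravity-equation observations with the regressors listed above and fits the model by least squares (the estimator used in the paper's robustness analysis). All numeric values, including the coefficients, are illustrative assumptions, not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic regressors mimicking the gravity specification (illustrative scales)
log_pop = rng.normal(16, 1.5, n)            # log population of trading partner
log_gdppc = rng.normal(9, 1.0, n)           # log income per capita at PPP
log_dist = rng.normal(8.5, 0.8, n)          # log distance from London (km)
eol = rng.integers(0, 2, n).astype(float)   # English-official-language dummy
rol = rng.normal(0, 1, n)                   # rule-of-law score

# Coefficients chosen only to mimic the signs discussed in the text
beta = np.array([-30.0, 1.0, 3.0, -0.4, 1.0, 1.5])
X = np.column_stack([np.ones(n), log_pop, log_gdppc, log_dist, eol, rol])
log_ex = X @ beta + rng.normal(0, 1, n)     # log(1 + exports)

# Ordinary least squares fit, as in the robustness check on the
# subsample of explicitly identified (uncensored) countries
beta_hat, *_ = np.linalg.lstsq(X, log_ex, rcond=None)
print(np.round(beta_hat, 2))
```

With enough observations, the slope estimates recover the assumed elasticities; the full-sample analysis instead requires the censored estimator described next in the text.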
The error term is assumed to be independent and identically distributed, following a normal distribution with zero mean and constant variance.

The U.K. exports financial and insurance services to a large number of countries. However, only 64 countries are explicitly identified, while the rest is reported as a residual by geographical region that is not directly attributable to any particular country. Therefore, the dependent variable contains the actual U.K. exports for explicitly identified trading partners, while the observation for each of the remaining countries is the U.K.'s export value reported for the rest of the region to which the country belongs. This makes the dependent variable partly continuous and censored at different values and requires an estimation technique that accounts for this feature.
The underlying, partially continuous, export values are modeled as a latent-variable regression following the gravity equation above. When the true export value is not known, only the highest possible value is reported (censored). Each censored observation is therefore considered separately using the censoring indicator I that indicates whether a particular observation is the actual value (I = 0) or is censored from the left (I = -1), meaning that the unobserved underlying value is smaller than or equal to the one that is observed. This model is an extension of the censored normal regression model first introduced by Tobin (1958), allowing for observation-by-observation censoring. It is estimated using a censored normal regression estimator.
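The observation-by-observation censoring scheme can be illustrated with a small maximum-likelihood sketch: uncensored observations contribute the normal density, while left-censored observations contribute the probability that the latent value lies at or below the reported bound. The single regressor, thresholds, and parameter values are hypothetical, chosen only to demonstrate the estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 3000

# Latent gravity-style outcome with one regressor (illustrative values)
x = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])
beta_true, sigma_true = np.array([0.5, 1.5]), 1.0
y_latent = X @ beta_true + rng.normal(0, sigma_true, n)

# Observation-by-observation left censoring: when the latent value falls
# below an observation-specific threshold, only that threshold (an upper
# bound on the true value) is reported -- mimicking regional remainders.
c = rng.normal(0.5, 1.0, n)          # per-observation censoring points
censored = y_latent < c              # the paper's indicator I = -1
y_obs = np.where(censored, c, y_latent)

def negloglik(params):
    b, log_s = params[:2], params[2]
    s = np.exp(log_s)                # enforce sigma > 0
    mu = X @ b
    ll = np.where(
        censored,
        norm.logcdf((y_obs - mu) / s),        # P(latent <= reported bound)
        norm.logpdf(y_obs, loc=mu, scale=s),  # density for exact observations
    )
    return -ll.sum()

res = minimize(negloglik, x0=np.zeros(3), method="BFGS")
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
```

Dropping the censored observations instead of modeling them would waste the information that the true values lie below the regional remainders, which is why the paper prefers the censored estimator over least squares on the full sample.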
III. DATA DESCRIPTION
The export statistics by geography are collected by the U.K.'s Office for National Statistics (ONS). They consist of the value (revenue) of the U.K.'s export to many specific countries and a remainder for each geographical region. The reporting system does not allow one to break down the remainder, according to written responses by ONS staff, since it is calculated as a residual in each region.
This paper uses annual data for 2016. It constitutes the most complete data set and is of sufficiently old vintage to be considered final. More recent data are too preliminary and incomplete to be considered reliable. Trade data are generally subject to several rounds of annual revisions as preliminary data are being updated with delayed reporting and reconciled with other statistical submissions. Financial services are broken down by type of revenues into explicitly charged and FISIM. Explicitly charged financial services typically include commissions and fees related to issuance, trading, and clearing, while FISIM is received from interest-bearing activities such as cross-border loans and investments. Revenues from insurance services include premia for direct insurance (life and non-life), reinsurance, and payments for auxiliary insurance services, such as risk assessment, claim statements, survey of claims, and loss statements.

Table 1 shows the geographical distribution of export of financial and insurance services. The largest markets are Europe and the Americas, accounting for 50 and 30 percent of total export of financial and insurance services, respectively. However, relative to the total export of U.K. services, financial and insurance services export is nearly equally distributed across all continents. Table 2 shows that the U.K.'s export of financial and insurance services accounts for 28.3 percent of the total U.K.'s services exports. Financial services explicitly charged make up the largest share. The U.K. financial sector exports to more than 64 countries. Although the 64 explicitly identified countries represent the bulk, smaller export markets still represent a sizable 12 and 13 percent of all exported insurance and financial services, respectively.
The sample includes 183 export destination countries across all continents, for which the IMF and the World Bank collect data. It includes all 64 countries that the U.K. explicitly reports exporting to and most of the remaining countries in each region. It represents a great variety of countries in terms of income per capita, population, distance, and quality of the rule of law (Table 3). In the sample, a third of countries use English as an official language; 14 percent of countries are members of the EU, 11 percent have signed an association agreement, and 17 percent entered in a formal economic partnership.
IV. RESULTS
The estimation results contain two specifications: (1) the basic specification that includes only income per capita, population, and general trade barriers-proxied by distance; and (2) the full specification as described in Section II., that is, including also measures of political and cultural similarities.
A. Financial Services
The U.K.'s export of financial services is proportionally positively related to the size of the export market. In the case of the export destination country's population, the U.K.'s export elasticity is unitary, meaning that a one percent growth in a country's population increases the U.K.'s export of financial services to that country by one percent. The elasticity to income per capita is even bigger, close to three. These results are stable across both specifications (basic and full) and types of exported financial services (Table 4).
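To make the elasticity interpretation concrete: in a log-log specification a coefficient b implies that a 1 percent increase in the regressor scales exports by a factor of 1.01^b. The short check below uses the approximate elasticities of one (population) and three (income per capita) reported here; the numbers are only a worked illustration.

```python
# Elasticity arithmetic in a log-log gravity equation:
# log(EX) = ... + b_pop * log(POP) + b_inc * log(GDPPC) + ...
b_pop, b_inc = 1.0, 3.0   # approximate elasticities reported in the text

# A 1 percent increase in a regressor scales exports by 1.01 ** b
pop_effect = 1.01 ** b_pop - 1   # ~0.0100, i.e., about 1 percent
inc_effect = 1.01 ** b_inc - 1   # ~0.0303, i.e., about 3 percent
print(f"{pop_effect:.4f} {inc_effect:.4f}")
```

This also shows why a log-log elasticity above one (as for income) implies a slightly more than proportional response for larger changes.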
The general trade barriers appear significant in the basic specification (columns 1, 3, and 5 in Table 4) across the types of financial services. Nevertheless, after controlling for the usual political and cultural similarities (columns 2, 4, and 6), general trade barriers (distance) continue to be significant only in the case of FISIM, that is, for the interest-bearing activities. This confirms previous findings of significant trade barriers in the case of cross-border lending (Heuchemer et al., 2009) and equity flows (Lane and Milesi-Ferretti, 2008). Both cross-border bank lending and equity investments are arguably the activities most affected by transaction costs stemming from asymmetric information (in screening and loan recovery).
In the case of the financial services explicitly charged, the bulk of exported financial services by the U.K., general trade barriers do not seem to be significant after accounting for the common language, partnership agreements, and the rule of law (columns 2, 4, and 6). This is a significant new finding since previous studies did not investigate this export segment despite its importance. It suggests that cross-border provision of explicitly charged services, i.e., commissions and fees from the issuance of debt instruments, trading and clearing assets and derivatives, is not hampered by barriers to trade, beyond the usually considered political and cultural similarities.
However, cultural and political similarities do indeed matter:

• Using English as an official language in a country increases the U.K.'s export of financial services to that country by one percent, on average. It is equally important for the U.K.'s export of all types of financial services.
• A better score in the World Bank WGI's rule of law in a country tends to increase the U.K.'s export of all types of financial services to that country by up to 2.4 percent.

• Formal association and economic partnership agreements also help boost export of explicitly charged financial services. An economic partnership agreement increases the U.K.'s export of these services by 1.7 percent, while the association agreement yields an additional 1.1 percent boost to the U.K.'s export. The U.K.'s EU membership (including benefits of passporting rights) brings close to one percent of additional U.K. export of explicitly charged financial services to the EU. The relatively small size of the EU membership effect is perhaps due to the attractiveness of the U.K. market (trading and clearing currencies and derivatives, and associated netting benefits, see Benos et al., 2019, and Choi et al., 2021) for EU countries regardless of the U.K.'s EU membership. In the case of FISIM export, these trade agreements do not seem to yield any significant export boost.
According to the models' fit, the full specification explains better the export of financial services than the basic specification and is thus the preferred model.
Although not directly comparable, most estimated elasticities fall in the ballpark of those in the literature. The estimated elasticity of export of financial services to distance of -0.4 falls within the range of available estimates (from -0.9 to -0.3), using equity flows, services exports, and cross-border loans in Portes and Rey (2005), Kimura and Lee (2006), and Heuchemer et al. (2009), respectively. Similarly, the near unitary elasticity of the export in financial services to population has also been found in Kimura and Lee (2006) using overall services export. The nearly unitary effect of common language on export of financial services is on the upper side of the range of 0.4 to 0.9 found in Kimura and Lee (2006) and Lane and Milesi-Ferretti (2008) for services exports and equity flows, respectively.
Nevertheless, the effect of GDP per capita at PPP between 2.3 and 3 is higher than the range of 0.8 to 1.3 found in the literature on equity flows, cross-border lending, and export of services, using various different measures of market size (market capitalization, GDP, or trade). It may be reconciled by the greater size of equity flows, asset holdings, and cross-border lending (hence lower elasticity to income) relative to the interest revenue flows (hence higher elasticity to income).
B. Insurance and Reinsurance Services
Results for the U.K.'s export of insurance and reinsurance services are shown in Table 5. The full specification outperforms the basic one in terms of fit for all types of insurance services and thus it is the preferred model.
General barriers to the cross-border provision of insurance services, including political and cultural, are not significant across all types of insurance services. The distance results are not significant either in the basic specification (columns 7, 9, 11, and 13) or in the full specification (columns 8, 10, 12, and 14). This may be driven by the fact that insurance and reinsurance, including the settlements of claims, are based on very detailed contracts (in the case of reinsurance, standardized across countries) that leave very little room for the ambiguity that is often the base for hidden transaction costs. In addition, since reinsurance primarily serves the purpose of geographical diversification of risk, it would be expected to be affected less by hidden trade barriers (transaction costs).
Similar to the export of financial services, the market size is also important for the U.K.'s export of insurance services. The elasticities of the U.K.'s direct insurance export to an export country's population and income are similar to those of financial services. They are much smaller (about half) in the case of reinsurance and auxiliary insurance services. The latter findings are rather intuitive, as reinsurance export is driven more by geographical diversification of risks than a particular country's income (showing the lowest elasticity to income of the three types of insurance services), while auxiliary insurance services (the settlement of claims) are derived from both direct insurance and reinsurance, hence the elasticity falls between those of the other two.
Cultural similarities, as represented by the use of English as an official language, boost the U.K.'s export of insurance services. The U.K.'s export of direct insurance and auxiliary insurance services is higher by close to one percent to countries that list English among official languages, while the reinsurance export benefits by half a percentage point.
Political similarities appear to matter much less for insurance services than financial services. The countries' scores in the rule of law do not emerge as a significant factor across all types of insurance services. This contrasts with the findings for financial services and, perhaps, it is again driven by the character of insurance services: namely, that they are based on well-specified standardized contracts and, in the case of reinsurance, internationally enforceable. The EU membership does not seem to matter for the cross-border provision of the U.K.'s insurance services. Association agreements are the only agreements that significantly increase the U.K.'s export of direct and auxiliary insurance services, but not reinsurance.
V. ROBUSTNESS ANALYSIS
The robustness analysis shows how the results would change if only countries that are explicitly identified in export statistics were used for the estimation of the gravity equation. The sample is much smaller, as it includes only 64 countries. The fully specified model is estimated using ordinary least squares and results are reported in Table 6.
A. Financial Services
The results on the smaller sample shadow those found using the full sample. They confirm the major finding that general trade barriers exist only in the export of FISIM-related financial services. Across all types of financial services, results also remain unchanged for:

• The market size: elasticities on population and income remain broadly unchanged, at one and three, respectively, compared to the full sample regressions; and

• The importance of English as an official language: it boosts the U.K.'s export by about a percentage point.
The major differences are in the significance of political similarities in the case of explicitly charged financial services. The EU dummy and the rule of law score are no longer statistically significant. This is likely due to the missing counterfactual of less developed countries, when focusing only on countries that are explicitly reported. In the restricted sample, nearly half of the countries are members of the EU and most countries are developed countries with very comparable rule of law scores.
B. Insurance and Reinsurance Services
All results for direct insurance and auxiliary insurance services are robust to reducing the sample to only countries that are explicitly identified in export statistics. The only difference is that the use of English as the official language loses its statistical significance for reinsurance services. The relatively weak significance of the dummy variable "English as an official language" in the full sample and its insignificance in the smaller sample for reinsurance may stem from the fact that reinsurance is a cross-border business by nature and less dependent on counterparts speaking English as an official language.
The barriers to trade and the benefit from the EU membership continue to be insignificant across all types of insurance services in the reduced sample. Among the bilateral agreements considered, only the association agreement brings a significant boost to the U.K.'s export of direct and auxiliary insurance services. The U.K. exports more direct and auxiliary insurance services to countries that use English as an official language.
VI. CONCLUSION
This paper brings new and richer insights into trade barriers in financial and insurance services. It analyzes export data for the U.K., the global leader in cross-border services provision, for all types of financial and insurance services using the gravity equation. It significantly broadens the analysis of trade barriers beyond the cross-border equity investments and bank lending that have been analyzed in the literature so far, which, however, represent only a small fraction of the overall export of financial and insurance services. It brings new insights by providing more granular information on trade barriers in each market segment.
The U.K. is the largest net exporter of combined financial and insurance services. It exports a wide range of financial and insurance products to more than 64 countries across the globe. The U.K.'s export (revenue) proportions across all types of financial services closely match the importance of each asset class globally. These aspects make the U.K. a very representative country for the global trade in financial and insurance services, and findings on the U.K. data may be of broader relevance.
The findings suggest that general barriers to trade are restricted to one type of financial service, that is, the interest-bearing activities, which in addition represent only a small fraction of trade in financial and insurance services. Therefore, the bulk of trade in financial and insurance services, including trading derivatives, currencies, issuance of debt securities, and all types of insurance services, is not subject to barriers to trade. These findings may help to reconcile the apparent contradiction between the "popular view of intense and widespread financial globalization" and findings of barriers to trade in financial services (Aviat and Coeurdacier, 2007).
Finally, the income of the U.K.'s export markets, English as an official language, trade agreements, and the countries' rule-of-law scores are found to matter for the U.K.'s exports of financial and insurance services. However, the effects on exports are small, in the range of one to two percentage points.
These findings, based on cross-sectional data from the year of the Brexit vote, further suggest that the U.K. benefited little from EU membership in terms of an additional boost to exports of financial and insurance services to the EU. This is likely due to the benefits the U.K. market offers to EU countries regardless of the U.K.'s EU membership (such as netting benefits from the large collateral pool for the CCPs). Based on the estimates, Brexit may be expected to reduce the U.K.'s exports of financial services to the EU (about 40 percent of total U.K. financial-services exports) by only 0.82 percent (a one-off decline), with no impact on insurance exports. This translates into a 0.33 percent one-off decline in the U.K.'s overall export level of financial and insurance services due to Brexit. Economic partnerships of the U.K. with third countries, concluded under the EU, are expected to continue post-Brexit.
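The overall figure is consistent with simple proportion arithmetic: applying the 0.82 percent decline to the roughly 40 percent EU share of financial-services exports. A back-of-envelope check (not a calculation from the paper; variable names are ours):

```python
# Back-of-envelope check: 0.82 percent decline applied to the ~40 percent
# EU share of U.K. financial-services exports gives the overall figure.
eu_share = 0.40          # EU share of U.K. financial-services exports
eu_decline_pct = 0.82    # estimated one-off decline in exports to the EU

overall_decline_pct = eu_share * eu_decline_pct
print(round(overall_decline_pct, 2))  # 0.33
```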
The U.K. has on-shored most of the equivalence decisions taken under the EU, except for CCPs, which operate under the Temporary Recognition Regime. If this regime is discontinued, it would have an additional one-off effect of a 0.2 percent decline in the U.K.'s export level of financial and insurance services.
Since the Brexit vote (2016-2020), the EU's share in the U.K.'s exports of financial and insurance services actually increased by one percentage point, from 39 to 40 percent for financial services and from 15 to 16 percent for insurance services. Nevertheless, considerable uncertainty remains around the full impact of Brexit, stemming from future regulatory and legal developments, including the EU's decision on equivalence for U.K. CCPs (only temporary equivalence has been granted), and from methodological challenges, such as the lack of a time dimension in the analysis and the difficulty of establishing an appropriate counterfactual.
CASIA's System for IWSLT 2020 Open Domain Translation
This paper describes the CASIA system for the IWSLT 2020 open domain translation task. This year we participate in both the Chinese→Japanese and Japanese→Chinese translation tasks. Our system is a neural machine translation system based on the Transformer model. We augment the training data with knowledge distillation and back-translation to improve translation performance. Domain data classification and weighted domain model ensemble are introduced to generate the final translation result. We compare and analyze the performance on development data with different model settings and different data processing techniques.
Introduction
Neural machine translation (NMT) has been introduced and has achieved great success during the past few years (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Wu et al., 2016; Gehring et al., 2017; Zhou et al., 2017; Vaswani et al., 2017). Among the different neural network architectures, the Transformer, which is based on the self-attention mechanism, has further improved translation quality thanks to its capacity for feature extraction and word sense disambiguation (Tang et al., 2018a,b). In this paper, we describe our Transformer-based neural machine translation system submitted to the IWSLT 2020 Chinese→Japanese and Japanese→Chinese open domain translation task (Ansari et al., 2020).
Our system is built upon the Transformer neural machine translation architecture. We also adopt Relative Position (Shaw et al., 2018) and Dynamic Convolutions (Wu et al., 2019) to investigate the performance of advanced model variations. For the implementation, we extend the latest release of Fairseq (https://github.com/pytorch/fairseq) (Ott et al., 2019). For data pre-processing, we use byte-pair encoding (BPE) segmentation (Sennrich et al., 2016b) for the source side and character-level segmentation for the target side to improve model performance on rare words. We also investigate the influence of different segmentation methods, including word, BPE and character segmentation, for both sides.
To further improve translation quality, we use back-translation with a sub-selected monolingual corpus as data augmentation to build additional pseudo-parallel training data. Sentence-level knowledge distillation is used to strengthen the student model with multi-policy teacher models covering left→right, right→left, source→target and target→source directions.
We also investigate the domain information of the large training data by using a BERT-based domain classifier; BERT is a masked language model that has been shown effective in large-scale text classification tasks (Devlin et al., 2019). With the in-domain data, we transfer the general-domain model to each specific domain, and use weighted domain model ensemble as the decoding strategy. Figure 1 depicts the whole process of our submission system: we pre-process the provided data and train our advanced Transformer models on the bilingual data together with synthetic corpora from back-translation and knowledge distillation. With domain classification and fine-tuning techniques, we obtain multiple models for the ensemble strategy and post-processing. In this section, we will introduce each processing step in detail.
NMT Baseline
In this work, we build our model based on the powerful Transformer (Vaswani et al., 2017). The Transformer is a sequence-to-sequence neural model that consists of two components, the encoder and the decoder, as shown in Figure 2. The encoder network transforms an input sequence of symbols into a sequence of continuous representations. The decoder, on the other hand, produces the target word sequence by predicting each word using a combination of the previously predicted words and relevant parts of the input sequence representations. Relying entirely on the multi-head attention mechanism, the Transformer with the beam search algorithm achieves state-of-the-art results for machine translation.
Multi-Head Attention
We use multi-head attention with h heads, which allows the model to jointly attend to information from different representation subspaces at different positions. Formally, multi-head attention first obtains h different representations (Q_i, K_i, V_i): for each attention head i, the hidden state matrix is projected into distinct query, key and value representations. We then perform scaled dot-product attention for each head, concatenate the results, and project the concatenation with a feed-forward layer.
Scaled Dot-Product Attention An attention function can be described as a mapping from a query and a set of key-value pairs to an output. Specifically, we multiply the query Q_i by the key K_i to obtain an attention weight matrix, which is then multiplied by the value V_i to obtain the self-attention representation of each token. As shown in Figure 3, we compute the matrix of outputs as Attention(Q_i, K_i, V_i) = softmax(Q_i K_i^T / sqrt(d_k)) V_i, where d_k is the dimension of the key. For the sake of brevity, we refer the reader to Vaswani et al. (2017) for more details.
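A minimal NumPy sketch of this computation (shapes and the toy inputs are ours, for illustration only):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # (n_queries, d_v)

# Toy example: 2 queries attending over 3 key/value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 5))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 5)
```

In the multi-head setting this function is applied once per head after the per-head projections, and the h outputs are concatenated.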
Back-Translation
Back-translation is an effective and commonly used data augmentation technique to incorporate monolingual data into a translation system (Sennrich et al., 2016a; Zhang and Zong, 2016). Especially for low-resource language tasks, it is indispensable to augment the training data by mixing a pseudo corpus with the parallel part. Back-translation first trains an intermediate target-to-source system that is used to translate monolingual target data into additional synthetic parallel data. This data is used in conjunction with the human-translated bitext to train the desired source-to-target system. Selecting appropriate sentences from the abundant monolingual data is a crucial issue due to hardware limitations and the huge time overhead. We therefore trained an n-gram language model on the target side of the bilingual data to score the monolingual sentences for each translation direction.
Recent work (Edunov et al., 2018) has shown that different methods of generating the pseudo corpus affect translation performance differently: sampling or noisy synthetic data gives a much stronger training signal than data generated by beam search or greedy search. We adopt the back-translation script from fairseq and generate back-translated data with sampling for both translation directions.
Knowledge Distillation
The goal of knowledge distillation is to deliver a student model that matches the accuracy of a teacher model (Kim and Rush, 2016). Prior work demonstrates that the student model can even surpass the accuracy of the teacher. In our experiments, we adopt the sequence-level knowledge distillation method and investigate four different teacher models to boost the translation quality of the student model. S2T+L2R Teacher Model: We translate the source sentences of the parallel data into the target language using our source-to-target (briefly, S2T) system described in Section 2.1 in a left-to-right (briefly, L2R) manner.
S2T+R2L Teacher Model: We translate the source sentences of the parallel data into target language using our S2T system with right-to-left (briefly, R2L) manner.
T2S+L2R Teacher Model:
We translate the target sentences of the parallel data into source language using our target-to-source (briefly, T2S) system with L2R manner.
T2S+R2L Teacher Model: We translate the target sentences of the parallel data into source language using our T2S system with R2L manner.
In the final stage, we use the combination of these translated pseudo corpora to improve the student model. It is worth noting that we also mix the original bilingual sentences into the pseudo training corpus.
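The corpus combination described above can be sketched as below. This is a simplification of the setup (the function name is ours): the T2S teachers actually translate target sentences to produce pseudo sources, whereas this sketch treats every teacher as a source-side translator.

```python
def build_distillation_corpus(bitext, teachers):
    """Sequence-level KD: pair each source sentence with every teacher's
    translation, then mix the original human bitext back in.

    bitext   -- list of (src, tgt) human-translated pairs
    teachers -- callables mapping a source sentence to a translation
    """
    pseudo = [(src, teach(src)) for teach in teachers for src, _ in bitext]
    return pseudo + list(bitext)   # keep the original parallel data too

demo = build_distillation_corpus([("src", "tgt")], [lambda s: "teacher_out"])
print(len(demo))  # 2 (one pseudo pair + the original pair)
```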
Model Ensemble and Reranking
Model ensemble is a method to integrate the probability distributions of multiple models before predicting the next target word (Liu et al., 2018). We average the last 20 checkpoints for each single model to avoid overfitting; one checkpoint is saved per 1000 steps. For the model ensemble, we train six separate models: we fine-tune our student model described in Section 2.3 and our back-translation model described in Section 2.2 on corpora from three different domains (Spoken, Wiki and News). We use a weighted ensemble to generate the translation result, in which the weights for each domain model are calculated by a BERT-based domain classifier. The domain-specific data for training the domain classifier and fine-tuning the student translation model are described in detail in Section 3.4.
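The checkpoint averaging mentioned above can be sketched as a per-parameter mean over the saved state dicts. Plain Python lists stand in for tensors here, and the function name is ours; fairseq ships its own averaging utility for real checkpoints.

```python
def average_checkpoints(checkpoints):
    """Average the parameters of the last N saved checkpoints (here the
    last 20, one per 1000 steps) to reduce overfitting noise.

    checkpoints -- list of {param_name: list-of-floats} state dicts
    """
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        cols = zip(*(ckpt[name] for ckpt in checkpoints))
        avg[name] = [sum(col) / n for col in cols]
    return avg

ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
print(average_checkpoints(ckpts))  # {'w': [2.0, 3.0]}
```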
For reranking, we rescore the 50-best lists output by the ensemble model using rescoring models, which include the models we trained with different model sizes, different corpus portions and different token granularities.
Data Preparation
This section introduces the methods we employ to prepare the provided parallel data (18.9M web crawled corpus and 1.9M existing parallel sources) and monolingual sentences (unaligned web crawled data). We also describe how to prepare domain specific data to facilitate translation.
The provided parallel corpus existing parallel for the two translation directions consists of around 1.9M sentence pairs with around 33.5M characters (Chinese side) in total. Furthermore, a large, noisy set of Japanese-Chinese segment pairs built from web data, web crawled, is also provided, which consists of around 18.9M sentence pairs with around 493.9M characters (Chinese side) in total. We use the provided development dataset as the validation set during training, which consists of 5,304 sentence pairs. The average length and length ratio of the provided parallel corpus and the development dataset are shown in Table 1.
Pre-processing and Post-processing
In the open domain translation task both on Chinese→Japanese and Japanese→Chinese translation directions, we first implement pre-processing on training corpus and then filter it.
Before pre-processing, we remove illegal sentences from the provided Japanese-Chinese parallel corpus, including duplicate sentences and sentences in languages other than the source or target (filtered by our language detection tools).
Pre-processing steps include escape character transformation, text normalization, language filtering and word segmentation. There are many escape characters in existing parallel and web crawled that do not occur in the development set. We therefore transform all these escape characters into the corresponding marks with a well-designed rule-based method, making the training data consistent with the evaluation data.
The text normalization step mainly focuses on the normalization of numbers and punctuation. Based on analysis of the development set, we found that in Chinese most punctuation marks are double-byte characters (DBC) while most numbers are single-byte characters (SBC), whereas in Japanese most numbers and punctuation marks are double-byte characters (DBC). Hence we normalize the number and punctuation format to match the development set.
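A minimal sketch of the DBC/SBC number normalization described above (the function names and the offset trick are ours, not the paper's): full-width digits U+FF10-U+FF19 map to ASCII digits by a fixed codepoint offset of 0xFEE0.

```python
# Full-width (DBC) characters U+FF01-U+FF5E map to ASCII U+0021-U+007E
# by an offset of 0xFEE0; digits are the slice U+FF10-U+FF19.
TO_HALF = {0xFF10 + d: ord('0') + d for d in range(10)}  # full- -> half-width digits
TO_FULL = {v: k for k, v in TO_HALF.items()}             # half- -> full-width digits

def normalize_zh(text):
    """Chinese convention in the dev set: half-width (SBC) numbers."""
    return text.translate(TO_HALF)

def normalize_ja(text):
    """Japanese convention in the dev set: full-width (DBC) numbers."""
    return text.translate(TO_FULL)

print(normalize_zh("２０２０年"))  # -> "2020年"
print(normalize_ja("2020年"))      # -> "２０２０年"
```

A full normalizer would extend the mapping tables to punctuation following the same per-language conventions.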
In the word segmentation step, we apply Jieba as our Chinese word segmentation tool for the Chinese parallel and monolingual data. For Japanese text, we use MeCab (Toshinori Sato and Okumura, 2017). After pre-processing, we filter the training corpus as described in Section 3.2.
Finally, we apply Byte Pair Encoding (BPE) (Sennrich et al., 2016b) to the source language, since it gave the best performance in our preliminary machine translation experiments.
For the target side, we use character granularity, because a character-level decoder performed better in our preliminary experiments. Post-processing steps are similar to pre-processing, without filtering. We apply escape character transformation, text normalization and unknown word (UNK) processing to the machine translation results, using the same methods for escape character transformation and text normalization as in pre-processing. For UNK processing, we find that some numbers cannot be well translated by the model, so we replace these UNKs with the numbers from the source sentence; otherwise, we remove the UNK symbols.
Parallel Data Filtering
The following methods are applied to further filter the parallel sentence pairs.
We remove sentences longer than 50 tokens and keep the parallel sentences whose length ratio (Ja/Zh) is between 0.53 and 2.90. We then compute the word alignment of each sentence pair using fast align (Dyer et al., 2013). The percentage of aligned words and the alignment perplexity are used as metrics, with thresholds set to 0.4 and −30 respectively. Through this filtering procedure, the data is reduced from 20.9M to 15.7M sentence pairs, as shown in Table 2.
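The length and ratio filters above can be sketched as follows (whitespace token counting and the function name are our simplifications; the alignment-based filters are omitted):

```python
def keep_pair(ja, zh, max_len=50, ratio_lo=0.53, ratio_hi=2.90):
    """Length-based filter used before the word-alignment stage:
    drop pairs that are too long or whose Ja/Zh length ratio is extreme."""
    len_ja, len_zh = len(ja.split()), len(zh.split())
    if len_ja == 0 or len_zh == 0:
        return False
    if len_ja > max_len or len_zh > max_len:
        return False
    return ratio_lo <= len_ja / len_zh <= ratio_hi

pairs = [("a b c", "x y"), ("a " * 60, "x y z"), ("a", "w x y z")]
print([keep_pair(ja, zh) for ja, zh in pairs])  # [True, False, False]
```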
Monolingual Data Filtering
It has been shown that back-translation is a simple but effective approach to enhance translation quality, as described in Section 2.2. To apply it, we extract high-quality monolingual sentences from the provided unaligned web crawled data. After removing illegal sentences from the web crawled corpus, we limit the maximum sentence length to 50 and remove noisy data with a language model. Specifically, we use the KenLM toolkit to train two language models on Japanese and Chinese monolingual data extracted from the provided parallel corpus existing parallel. We then rank the sentences by the perplexities computed with the trained language models and filter with a perplexity threshold of 4 for Chinese and 3 for Japanese; the perplexities are normalized by sentence length. We obtain 6.1M and 16.4M monolingual sentences for Japanese and Chinese respectively. The filtering results are presented in Table 3. The obtained monolingual sentences are fed to the trained model to generate pseudo-parallel sentence pairs, which are employed to boost model performance.
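The length-normalized perplexity ranking above can be sketched as follows. The function names and the toy "language model" are ours; a real KenLM model returns base-10 log probabilities, so the log base must be matched in practice.

```python
import math

def length_normalized_ppl(sentence, logprob_fn):
    """Perplexity normalized by sentence length, as used to rank the
    monolingual web-crawled sentences. `logprob_fn` returns the total
    natural-log probability of a token sequence."""
    tokens = sentence.split()
    return math.exp(-logprob_fn(tokens) / max(len(tokens), 1))

def select_monolingual(sentences, logprob_fn, max_len=50, ppl_threshold=4.0):
    return [s for s in sentences
            if len(s.split()) <= max_len
            and length_normalized_ppl(s, logprob_fn) <= ppl_threshold]

# Toy uniform "LM" over a 3-word vocabulary: per-token perplexity is exactly 3.
toy_lm = lambda toks: len(toks) * math.log(1.0 / 3.0)
print(select_monolingual(["a b c", "d e"], toy_lm))  # both kept (ppl 3 <= 4)
```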
Domain Data Processing
Although the amount of provided training data is large, it is a noisy set of web data built from multiple domain sources. Koehn and Knowles (2017) showed that translation quality is highly sensitive to the domain of the training data: only the same or similar corpora are typically able to improve translation performance. Therefore, we apply domain adaptation methods in this task. Adaptation methods for neural machine translation have attracted much attention in the research community (Britz et al., 2017; Wang et al., 2017; Chu and Wang, 2018; Zhang and Xiong, 2018; Wang et al., 2020). They can be roughly classified into two categories, namely data selection and model adaptation. The former focuses on selecting similar training data from out-of-domain parallel corpora, while the latter modifies the model itself to improve performance. Following these two categories, our domain data processing takes the following steps, as shown in Figure 4. Domain Label: In this task, two kinds of domain labels are provided: domains in existing parallel and domains in web crawled. Since the latter is mainly a source-document index for each sentence pair, the former is more meaningful for domain classification. We categorize the domain labels of the existing parallel data into three commonly used classes, namely Wiki, Spoken and News. The label Wiki includes wiki facebook, wiki zh ja tallip2015 and wiktionary. The label Spoken includes ted and opensubtitles. The label News includes global-voices, news-commentary and tatoeba.
Domain classification: Data selection can be conducted in a supervised or unsupervised manner (Dou et al., 2019). Since the existing parallel data comes with a data source descriptive file that can be regarded as domain labels, we choose the supervised way. We use two BERT models pretrained on Chinese (https://github.com/ymcui/Chinese-BERT-wwm) and Japanese (https://github.com/cl-tohoku/bert-japanese) data, respectively. The BERT models are then fine-tuned for text classification on the source and target sides of existing parallel with the three domain labels we defined. Since the domain data is uneven, we also adopt oversampling and use extra data to enlarge the News domain. For the remaining data in web crawled, we use the classification model to assign each sentence pair to one of the three domains. The statistics of the domain data we used are shown in Table 4.
Decoding Stage: Considering that the test set is also composed of mixed-genre data, we first classify the domain of each sentence in the test set and obtain the probability of each domain. Then we apply a weighted ensemble method to integrate the NMT models: when computing the output probability of the next word, we weight the output probability of each domain-specific translation model by the sentence's corresponding domain probability.
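A sketch of one weighted-ensemble decoding step (array shapes, names and the toy numbers are ours): each domain model's next-word distribution is mixed with weights taken from the domain classifier, which amounts to a convex combination of the distributions.

```python
import numpy as np

def weighted_ensemble_step(model_probs, domain_weights):
    """Combine next-word distributions from the domain-specific models,
    weighting each model by the classifier's domain probability for the
    current source sentence, then renormalize.

    model_probs    -- (n_models, vocab_size) next-word distributions
    domain_weights -- (n_models,) domain probabilities, summing to 1
    """
    mixed = domain_weights @ model_probs  # convex combination
    return mixed / mixed.sum()

probs = np.array([[0.7, 0.2, 0.1],    # e.g. Spoken-domain model
                  [0.2, 0.5, 0.3],    # Wiki-domain model
                  [0.1, 0.3, 0.6]])   # News-domain model
weights = np.array([0.6, 0.3, 0.1])
print(weighted_ensemble_step(probs, weights))
```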
Other Data Resource
The task description says that the test data is a mixture of genres but the provided development set is mainly from spoken domain. Furthermore, we find that the domain distribution of the training data is severely unbalanced (as shown in Table 4). Especially, the data of News domain is quite limited. Due to above two reasons, we decided to crawl some data from other domains.
It is easy to find that hujiangjp (https://jp.hjenglish.com/new/), a website helping people to study foreign languages, contains some parallel Chinese-Japanese sentences. Accordingly, we crawled all the available data from this website before the test data release. The extra data consists of 12,665 parallel sentences, from which we randomly select 4,877 sentence pairs to build an extra development set. When training each domain model, all the extra data are used as part of the News domain. We find that 383 Chinese→Japanese pairs and 421 Japanese→Chinese pairs in the crawled data overlap with the final test set. We used the originally trained model to decode the test set and decided not to retrain our model, since retraining would take much time and the organizers stipulate that models cannot be changed after the test set is released. We nevertheless suggest also evaluating translation quality on the remaining test set, excluding the overlapped sentences.
Experiment Setup
Our implementation of the Transformer model is based on the latest release of Fairseq. We use the Transformer-Big setting, which has N = 6 layers for both the encoder and the decoder. Each layer consists of a multi-head attention sublayer with h = 16 heads and a feed-forward sublayer with inner dimension d_ff = 4096. The word embedding dimensions for source and target and the hidden state dimension d_model are set to 1024.
In the training phase, the dropout rate P_drop is set to 0.1; in the fine-tuning phase it is raised to 0.3 to prevent over-fitting. We use cross entropy as the loss function and apply label smoothing with ε_ls = 0.1. For the optimizer, we use Adam (Kingma and Ba, 2015) with β_1 = 0.9, β_2 = 0.98 and ε = 10^−8. The initial learning rate is set to 10^−4 for training and 10^−5 for fine-tuning.
The models with the complete training data are trained on 4 GPUs for 100,000 steps. For the datasets with knowledge distillation or back-translation, the models are trained for 150,000 steps. We validate the model every 1,000 mini-batches on the development data and perform early stopping when the best validation loss has been stable for 10,000 steps. At the end of the training phase, we average the last 20 checkpoints for each single general-domain model. In the fine-tuning phase, we use the averaged general-domain model to initialize each domain model, and continue training on 1 GPU with domain data for 50,000 steps without early stopping. The batch sizes in training and fine-tuning are set to 32768 and 8192 respectively.
Result
We report results on the Chinese→Japanese (ZH→JA) and Japanese→Chinese (JA→ZH) translation directions, using the character BLEU score calculated with the multi-bleu-detok.perl script. As the results show, filtering the complete parallel data plays an important role in our system, and back-translation and knowledge distillation consistently improve the BLEU score. When applying domain classification, we classify each sentence with the BERT-based domain classifier and decode it with the corresponding domain model. For the combination methods, we build six separate models covering three domains (Wiki, Spoken and News), each fine-tuned on the two large synthetic datasets (from back-translation and knowledge distillation). In the ensemble baseline, all of these models share the same weight when predicting word distributions; the weighted ensemble instead applies per-model weights obtained from the domain classifier. With the weighted domain ensemble, our system achieves the best performance on the development data in terms of BLEU, surpassing the single baseline systems by 7.34 BLEU for Chinese→Japanese and 8.36 BLEU for Japanese→Chinese.
We also observe a performance drop with reranking. The reason may be that the reranking models are trained on the complete, general-domain parallel data and may assign lower scores to domain-specific translations. As a result, our submission is based on the weighted ensemble system, which performs best in our experiments.
Analysis
We compare the performance of different model variations and token granularities on the Chinese→Japanese development data. The data used to train these models is the existing parallel data, which consists of 1.9M parallel sentences.
For the model variations, we compare Relative Position (Shaw et al., 2018), Dynamic Convolutions (Wu et al., 2019) and the Transformer Base and Big settings (Vaswani et al., 2017). As shown in Table 6, the best result is produced by the Transformer-Big setting, which we use as the default when training on large datasets.
For the token granularities, we report results with four tokenization methods: Word→Word, Character→Character, BPE→BPE and BPE→Character. As shown in Table 7, adopting BPE on the source side and characters on the target side performs better than the other token granularities, and this combination is used in our submission systems.
We notice a large divergence between the two translation directions when using the complete parallel data compared with the filtered data. Examining the results and the parallel data in depth, we find that the quality of the Japanese data is lower; for example, some sentences consist of punctuation only, which may harm the target-side language model learned by the decoder. After parallel data filtering, the invalid sentences are removed and the translation quality of ZH→JA is improved.
We also find that the provided development data is mainly from the spoken domain, so we use our collected data as an extra development set from other domains to investigate the general performance of the single models. The results are shown in Table 8. There is a small gap between the provided development data and our collected data, which indicates that domain information may further improve translation quality; this led us to adopt domain transfer and ensemble techniques. Note that the extra development set is only used for the single models. For system combination, these data are added into the News domain, since the News domain data in the parallel dataset is much smaller than the other domains (Section 3.4).
Conclusion
We present the CASIA neural machine translation system submitted to the IWSLT 2020 Chinese→Japanese and Japanese→Chinese open domain translation task. Our system is built with the Transformer architecture and incorporates the following techniques:
• Deliberate data pre-processing and filtering
• Back-translation of a selected monolingual corpus
• Knowledge distillation from multi-policy teacher models
• Domain classification and weighted domain model ensemble
As a result, our final system achieves substantial improvements over the baseline system.
Temperature-dependent magnetocrystalline anisotropy of rare earth/transition metal permanent magnets from first principles: The light RCo$_5$ (R=Y, La-Gd) intermetallics
Computational design of more efficient rare earth/transition metal (RE-TM) permanent magnets requires accurately calculating the magnetocrystalline anisotropy (MCA) at finite temperature, since this property places an upper bound on the coercivity. Here, we present a first-principles methodology to calculate the MCA of RE-TM magnets which fully accounts for the effects of temperature on the underlying electrons. The itinerant electron TM magnetism is described within the disordered local moment picture, and the localized RE-4f magnetism is described within crystal field theory. We use our model, which is free of adjustable parameters, to calculate the MCA of the RCo$_5$ (R=Y, La-Gd) magnet family for temperatures 0--600 K. We correctly find a huge uniaxial anisotropy for SmCo$_5$ (21.3 MJm$^{-3}$ at 300 K) and two finite temperature spin reorientation transitions for NdCo$_5$. The calculations also demonstrate dramatic valency effects in CeCo$_5$ and PrCo$_5$. Our calculations provide quantitative, first-principles insight into several decades of RE-TM experimental studies.
The excellent properties of rare earth/transition metal (RE-TM) permanent magnets have facilitated a number of technological revolutions in the last 50 years. Now, the urgent need for a low carbon, low emission economy is driving a global research effort dedicated to improving RE-TM performance for more efficient deployment in the drive motors of hybrid and electric vehicles [1]. RE-TM magnets combine the large volume magnetization and high Curie temperature of the elemental TM magnets Fe or Co with the potentially huge magnetocrystalline anisotropy (MCA) of the REs [2]. The REs consist of Sc, Y and the lanthanides La-Lu, but it is Y and the light lanthanides, i.e. those with smaller atomic masses than Gd, which are most attractive for applications due to their lower criticality [3]. Nd and Sm stand out thanks to the highly successful Nd-Fe-B and Sm-Co magnets [4][5][6][7], but Ce has the attraction of having a low cost and high abundance compared to the other REs [8].
Traditionally, RE-TM magnet research has been driven by experiments. First-principles computational modelling can uncover fundamental physical principles and provide new directions for RE-TM magnet design [9,10], but faces two challenges. First, RE-TM magnetism originates from both itinerant electrons and more localized lanthanide 4f electrons [11]. Although the local spin-density approximation to density-functional theory [12] (DFT) satisfactorily describes the itinerant electrons, the 4f electrons require specialist techniques like dynamical mean field theory [13,14], the local self-interaction correction (LSIC) [15], the open-core approximation [16][17][18], or Hubbard +U models [19,20]. Second, DFT calculations are most easily performed at zero temperature, but under actual operating conditions the RE-TM magnetic moments are subject to a considerable level of thermal disorder [21]. The disordered local moment (DLM) picture accounts for this disorder within DFT [21] and, combined with the LSIC, has been used successfully to calculate magnetizations, Curie temperatures and phase diagrams of itinerant electron and RE-based magnets [22][23][24]. DFT-DLM studies of the MCA have also been performed on itinerant electron and Gd-based magnets [25][26][27], but it is important to realise that these materials are special cases, where there is no contribution to the MCA from 4f electrons interacting with their local environment (the crystal field). A first-principles, finite temperature theory which accounts for these crystal field effects (and is therefore applicable to general RE-TM magnets like Nd-Fe-B or Sm-Co) has, up to now, proven elusive.
In this Rapid Communication, we rectify this situation and demonstrate a first-principles theory of the MCA of RE-TM magnets including the crystal field interaction. Fundamentally, the theory takes a model originally developed by experimentalists, and obtains all of the quantities required by the model from DFT-based calculations. We demonstrate the theory on the RECo$_5$ family of magnets (RE = Y, La, Ce, Pr, Nd, Sm and Gd). The RECo$_5$ phase, shown in Fig. 1, is important due to its presence in SmCo$_5$ and in the cell-boundary phase of commercial Sm$_2$Co$_{17}$ [7,28,29]. We calculate anisotropy constants and spin reorientation transition temperatures to analyse experimental data obtained 40 years ago [30,31].
The model partitions the RE-TM magnet into an itinerant electron subsystem (originating from the TM and RE valence electrons), and a subsystem of strongly-localized RE-4f electrons [32]. Critically, the RE ions tend to adopt a 3+ state with a common $s^2d$ valence structure [33]. As a consequence, for each RE-TM magnet class the itinerant electron subsystem is essentially independent of the specific RE [32], so its properties can be obtained for the most computationally convenient prototype (e.g. RE = Y or Gd). The itinerant electrons drive the overall magnetic order, primarily determining the Curie temperature T$_C$; for example, T$_C$ differs by only 20 K between Y$_2$Fe$_{14}$B and Nd$_2$Fe$_{14}$B [34]. The itinerant electrons also drive the RE-4f magnetic ordering through an antiferromagnetic exchange interaction [35], with an exchange field of a few hundred Tesla at cryogenic temperatures [36]. RE-RE interactions are relatively weak [37].
The RE-4f subsystem contributes to the magnetic moment and can have a small effect on T C [24], but its most important contribution is to the MCA, which in turn provides an intrinsic mechanism for coercivity [38]. The origin of the potentially huge MCA of RE-TM magnets is illustrated in Fig. 1. The itinerant electrons and surrounding ions set up a (primarily) electrostatic potential with the symmetry of the RE site [32], known as the crystal field (CF). The CF calculated for RECo 5 is shown as a contour plot in Fig. 1, with electrons attracted to the blue regions and repelled by the red. Meanwhile, the unfilled RE-4f shells form non-spherically-symmetric charge clouds coupled to the magnetic moment direction through a strong spin-orbit interaction [32]. The charge clouds are elongated either parallel or perpendicular to the moment direction depending on Hund's rules, with the opposing examples of Sm and Nd shown in Fig. 1. Placed in the RECo 5 CF, the charge clouds will preferentially orientate with their elongated part lying in the attractive region, generating the MCA [39]. A secondary contribution to the MCA is provided by the itinerant electrons, with YCo 5 (which has no RE-4f electrons) having an MCA energy of 5 MJm −3 at room temperature [30], ten times larger than that of hexagonal Co [40].
The above ideas are formulated mathematically by introducing a Hamiltonian for the RE-4f electrons Ĥ [41]:

Ĥ = λ L̂·Ŝ + Σ i V(r i ) + 2μ B Ŝ·B exch + μ B (L̂ + 2Ŝ)·H.   (1)

Here, L̂ and Ŝ are the orbital and spin angular momentum operators, where for each RE 3+ ion L and S are fixed by Hund's rules. λ quantifies the spin-orbit interaction, and μ B is the Bohr magneton. H is an external magnetic field, and B exch is the exchange field originating from the itinerant electrons. V(r i ) is the crystal field potential, where i labels each 4f electron. The Hamiltonian in Eq. 1 acts upon the RE-4f wavefunction, which can be expressed schematically as a radial part multiplied by the angular part |J, L, S, M J ⟩, where the quantities within the ket are standard quantum numbers [32]. Equation 1 is diagonalized within the manifold of states |J, L, S, M J = J, J − 1, J − 2, ..., −|J|⟩. We consider the ground J = |L − S| multiplet, along with the first excited multiplet J = |L − S| + 1 for Pr and Nd, and also the next excited multiplet J = |L − S| + 2 for Sm. Angular parts of the matrix elements of the CF term in Eq. 1 are calculated by decomposing the states into |L, S, M L , M S ⟩ form, and then using the operator form of the potential [42], which introduces Clebsch-Gordan coefficients and l-dependent (Stevens) prefactors [32,43]. The radial parts are incorporated into the CF coefficients B lm [33], described in more detail below. For a given temperature T, we use the eigenvalue spectrum of Ĥ to construct the partition function Z RE and the RE-4f free energy contribution F RE = −k B T ln Z RE . The quantities forming the itinerant contribution to the free energy F itin are determined from DFT-DLM calculations. F itin depends on the Co magnetization M Co :

F itin = K 1 sin 2 θ + K 2 sin 4 θ − μ 0 M Co ·H,  with  M Co = M 0 Co (1 − p sin 2 θ),   (2)

where K 1 and K 2 quantify the itinerant electron contribution to the anisotropy, and cos θ = M̂ Co ·ẑ, with ẑ pointing along the c axis. p quantifies the magnetization anisotropy, which in the RECo 5 compounds can cause the Co magnetization to reduce by a few percent from its maximum value M 0 Co [44].
F itin depends on temperature through the quantities K 1 , K 2 , M 0 Co and p. The two subsystems are coupled together by noting that M̂ Co = −B̂ exch , i.e. the exchange field felt by the RE-4f electrons points antiparallel to the Co magnetization [16]. The equilibrium direction of M Co (equivalently, of B exch ) minimizes the sum of F itin and F RE . The RE magnetization is obtained as M RE = −μ B ⟨L̂ + 2Ŝ⟩ T , with ⟨...⟩ T denoting the thermal average over the eigenvalue spectrum of Eq. 1 at the equilibrium value of B exch , and the total magnetization is M Tot = M RE + M Co . The magnetization measured along the field direction M is M Tot ·Ĥ, whilst the easy direction of magnetization α is obtained as cos −1 (M̂ Tot ·ẑ) in zero external field.
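This minimisation can be sketched numerically. In the snippet below (illustrative only: the anisotropy constants and the simple −A cos²θ stand-in for the planar RE-4f contribution are made-up values, not the paper's calculated quantities), the RE-4f free energy is evaluated as F RE = −k B T ln Z RE from a given eigenvalue spectrum, and the easy direction follows from minimising the total free energy over θ:

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant, eV/K

def free_energy_RE(energies_eV, T):
    """F_RE = -kB*T*ln(Z_RE) from the eigenvalue spectrum of Eq. (1).
    The ground-state energy is subtracted inside the exponentials for
    numerical stability and added back at the end."""
    E = np.asarray(energies_eV)
    E0 = E.min()
    Z = np.sum(np.exp(-(E - E0) / (kB * T)))
    return E0 - kB * T * np.log(Z)

def F_itin(theta, K1, K2):
    """Itinerant anisotropy energy in the K1*sin^2 + K2*sin^4 form."""
    s2 = np.sin(theta) ** 2
    return K1 * s2 + K2 * s2 ** 2

# Hypothetical constants (MJ m^-3): uniaxial itinerant anisotropy
# competing with a mocked planar RE-4f term A*cos^2(theta).
K1, K2, A = 5.0, 4.0, 8.0
thetas = np.linspace(0.0, np.pi / 2, 9001)
F_tot = F_itin(thetas, K1, K2) + A * np.cos(thetas) ** 2
alpha_deg = np.degrees(thetas[np.argmin(F_tot)])  # easy-cone angle
```

With these toy numbers the competition produces cone anisotropy: the minimum sits at sin²θ = (A − K1)/(2 K2), i.e. α ≈ 38°, mimicking the physics discussed for NdCo 5 below.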
B exch , K 1 , K 2 , M 0 Co and p are obtained using the FP-MVB procedure developed to calculate First-Principles Magnetization Vs B-field curves for GdCo 5 [27]. FP-MVB fits DFT-DLM torque calculations [45] for different magnetic configurations to extract the desired quantities [27,46]. For all RE 3+ Co 5 compounds considered here, we exploit the isovalence of the RE 3+ ions and substitute the RE with Gd in the FP-MVB calculations. This step ensures no erroneous double counting of the CF contribution, which is already accounted for by F RE , but still captures the coupling between TM-3d and RE-5d valence states. We do however use the (experimental) lattice parameters appropriate for each RE [47,48].
K 1 , K 2 , M 0 Co and p were fitted to calculations where θ was varied between 0–90° in 10° intervals. B exch was obtained by fitting the torque induced by introducing a 1° canting between the Gd and Co sublattices to a free energy of the form −B exch · M Gd,s , where M Gd,s is the spin moment of Gd including thermal disorder, i.e. the local spin moment weighted by the Gd order parameter [46]. For itinerant CeCo 5 and PrCo 5 (see below), the quantities in Eq. 2 were obtained directly from DFT-DLM calculations on these compounds. The calculations used the atomic sphere approximation, angular momentum expansions with maximum l = 3, and an adaptive reciprocal space sampling to ensure high precision [49]. Exchange and correlation were modelled within the local spin-density approximation (LSDA) [50], with an orbital polarization correction applied to the Co-d electrons [51] and the LSIC applied to Gd. The calculated quantities are given as Supplemental Material [52].
We calculate the RECo 5 CF coefficients using an yttrium-analogue model [33]. The basic premise here is that due to the isovalence of RE 3+ ions, the RE 3+ Co 5 CF potential (which originates from the valence electronic structure) can be substituted with that of Y 3+ Co 5 . This step ensures no double-counting of RE-4f electrons, and allows the use of projector-augmented wave-based DFT calculations to calculate the CF potential to high accuracy without needing special methods to treat the 4f electrons [33]. The CF potential is combined with the radial RE-4f wavefunctions obtained in LSIC calculations. At the RE site (symmetry D 6h ) there are four independent components of the CF potential which affect the 4f anisotropy, with (l, m) = (2,0), (4,0), (6,0) and (6,6) [=(6,−6)]. The calculated CF coefficients are given as Supplemental Material [52]. We note that this method implicitly neglects any temperature dependence of the CF coefficients themselves, and future work must evaluate the effects of finite temperature, e.g. due to charge fluctuations or lattice expansion [47]. The calculations were performed using the GPAW code [53] within the LSDA. A plane wave basis set with a kinetic energy cutoff of 1200 eV was used, and reciprocal space sampling performed on a 20×20×20 grid. The spin-orbit parameter λ was calculated using the RE-centred spherical potential V 0 (r) from the LSIC calculation as λ = ∫ dr r 2 n 0 4f (r) ζ(r)/(2S) [54], where the normalized spherically-symmetric 4f density n 0 4f (r) was also obtained from the LSIC calculation [33] and ζ(r) = (ħ 2 /2m 2 c 2 )(1/r)(dV 0 /dr). These λ values yield anisotropy constants indistinguishable from those calculated using experimental λ values extracted from spectroscopic measurements [41,54].
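The quoted λ integral is a one-dimensional radial quadrature and can be evaluated directly. In the sketch below the radial density and ζ(r) are hypothetical stand-ins for LSIC output, so only the procedure (not the numbers) is meaningful:

```python
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal quadrature on a 1D grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def spin_orbit_lambda(r, n4f, zeta, S):
    """lambda = (1/2S) * integral dr r^2 n4f(r) zeta(r), following the
    expression in the text; n4f must satisfy integral dr r^2 n4f = 1."""
    return trapezoid(r**2 * n4f * zeta, r) / (2.0 * S)

# Mock radial data (hypothetical, standing in for LSIC output):
r = np.linspace(1e-3, 10.0, 4000)            # radial grid, arb. units
shell = np.exp(-((r - 0.7) / 0.2) ** 2)      # 4f-like shell density
n4f = shell / trapezoid(r**2 * shell, r)     # enforce normalisation
zeta = 1.0 / r**3                            # schematic zeta(r)
lam = spin_orbit_lambda(r, n4f, zeta, S=5/2) # S = 5/2, e.g. Sm3+
```

A real calculation would replace the mock arrays with n 0 4f (r) and ζ(r) tabulated on the LSIC radial mesh; the quadrature itself is unchanged.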
We now demonstrate the theory by calculating experimentally-measurable quantities. Figure 2 (left panel) shows the anisotropy constants measured by Ermolenko in 1976 [30] for YCo 5 , LaCo 5 , NdCo 5 , SmCo 5 and GdCo 5 . They were extracted using the Sucksmith-Thompson (ST) method [40], which is based on the expression for the dependence of the free energy of a uniaxial ferromagnet on the magnetization direction Θ:

F(Θ) = κ 1 sin 2 Θ + κ 2 sin 4 Θ − μ 0 M·H.   (3)

As explained in detail in the Supplemental Material [52], measuring the magnetization along the hard direction and plotting the data as an Arrott plot (H/M vs. M 2 ) [55] allows κ 1 and κ 2 to be extracted from the gradient and intercept. Equation 3 and the ST method strictly apply to ferromagnets, but the same technical procedure can be applied to RE-TM ferrimagnets too [27]. However, the fact that the external field can induce a canting between the RE and TM moments means that the extracted anisotropy constants for the ferrimagnet are effective ones, which measure both the anisotropy of the individual sublattices and the strength of the exchange interaction keeping the spin moments antialigned [27,46]. The experimental data in Fig. 2 demonstrates the diversity in κ among RECo 5 . The behavior of YCo 5 and LaCo 5 , where the RE is nonmagnetic, is rather similar. Both compounds display uniaxial anisotropy associated with the itinerant electron subsystem. GdCo 5 is still uniaxial, but is softer than YCo 5 and LaCo 5 . Since the filled Gd-4f subshell makes no CF contribution to the anisotropy, this reduction in κ 1 is due to the field-induced canting of the Gd and Co magnetic moments [27]. SmCo 5 stands out for having the largest uniaxial anisotropy over the entire temperature range, with a room temperature value of 17.9 MJm −3 [30]. NdCo 5 has a negative κ 1 at low temperatures which switches to positive at approximately 280 K, and also has a non-negligible κ 2 , at variance with the other compounds.
As discussed below, this variation results in NdCo 5 undergoing spin reorientation transitions from planar→cone→uniaxial anisotropy [31]. The right panel of Fig. 2 is the main result of this work, showing the anisotropy constants obtained entirely from first principles. We calculated hard axis magnetization curves, and then performed the ST analysis on the data to extract κ 1 and κ 2 , in exact correspondence with the experimental procedure [52]. The calculations reproduce all of the experimentally-observed behavior. SmCo 5 and NdCo 5 have strong uniaxial and planar anisotropy at zero temperature, respectively. NdCo 5 has a non-negligible κ 2 and a value of κ 1 which changes sign between 280-290 K. YCo 5 and LaCo 5 have uniaxial anisotropy, with LaCo 5 slightly stronger. GdCo 5 also has uniaxial anisotropy but is softer, and has a complicated temperature dependence.
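The ST extraction used above can be sketched in a few lines. The snippet below generates a synthetic hard-axis curve from assumed constants (arbitrary units, with μ 0 absorbed, and made-up numbers rather than the paper's data) and recovers κ 1 and κ 2 from the gradient and intercept of the Arrott plot:

```python
import numpy as np

def sucksmith_thompson(H, M, Ms):
    """Recover (kappa1, kappa2) from hard-axis data via the Arrott-plot
    form of the ST method:
        H/M = 2*k1/Ms**2 + (4*k2/Ms**4) * M**2
    A linear fit of H/M against M^2 gives intercept and gradient."""
    slope, intercept = np.polyfit(M**2, H / M, 1)
    k1 = 0.5 * intercept * Ms**2
    k2 = 0.25 * slope * Ms**4
    return k1, k2

# Synthetic hard-axis magnetisation curve from assumed constants.
Ms, k1_true, k2_true = 1.0, 17.9, 1.5
M = np.linspace(0.1, 0.9, 50)
H = M * (2 * k1_true / Ms**2 + 4 * k2_true / Ms**4 * M**2)  # invert ST
k1, k2 = sucksmith_thompson(H, M, Ms)
```

On noise-free synthetic data the fit returns the input constants exactly; with experimental or calculated curves the same linear fit yields the (effective) anisotropy constants discussed in the text.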
Comparing in more detail, we find agreement between experimental and calculated κ values to within a few MJm −3 for all but the lowest temperatures, where the classical statistical mechanics of DFT-DLM calculations leads to inaccuracies [37], and high temperatures, where experimentally the compounds might undergo decomposition [27]. We calculate κ 1 at room temperature for SmCo 5 to be 21 MJm −3 . The calculations also reproduce more subtle features, for instance the slightly enhanced anisotropy (by less than 1 MJm −3 ) of LaCo 5 over YCo 5 . The least good agreement is for GdCo 5 , especially at higher temperatures; however, more recent measurements of κ 1 found different behavior at elevated temperatures [27,30]. At lower temperatures, we note that the present calculations do not include the magnetostatic dipole-dipole contribution to the MCA, or the Gd-5d contribution to the itinerant electron anisotropy, which we previously calculated to be 24% of the size of the Co contribution [27]. We conclude that omitting the magnetostatic and RE-d contributions to the anisotropy is reasonable for nonmagnetic REs or those with unfilled 4f shells (whose RE-4f anisotropy is much larger), but is less suitable for Gd-based magnets.
Unlike the other materials in Fig. 2, the easy direction of magnetization of NdCo 5 lies in the ab plane at low temperature, with polar angle α = 90°. The anisotropy within the ab plane is determined by the B 6±6 CF coefficients. Both our calculations and experiments find the easy direction to be the a axis, which points from the RE to between its nearest neighbour Co atoms [56]. Experimentally, as T is increased past approximately 240 K, a spin reorientation transition (SRT) occurs and the magnetization begins to rotate towards the c axis, i.e. planar→cone anisotropy. This rotation continues (decreasing α) until approximately 280 K, when a second SRT (cone→uniaxial) occurs. Further increasing T sees α remaining at 0° up to T C . The presence of the SRTs close to room temperature led to the proposal that NdCo 5 may be a candidate material for magnetic refrigeration [57]. The evolution of α as measured experimentally in Ref. 31 is shown in Fig. 3.
The calculated variation of α with temperature is also shown in Fig. 3. We see that the agreement with experiment is remarkably good, with calculated SRT temperatures of T SRT1 = 214 K (plane→cone) and T SRT2 = 285 K (cone→uniaxial). There is also good agreement between calculated and experimental κ values, especially in the SRT region. Indeed, the SRTs are intimately linked to the temperature dependence of κ 1 and κ 2 , with the plane→cone SRT occurring when κ 1 = −2κ 2 and the cone→axis SRT occurring when κ 1 crosses zero [58,59].
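The two SRT criteria lend themselves to a simple numerical sketch: given κ 1 (T) and κ 2 (T) on a temperature grid, the SRT temperatures are the sign changes of κ 1 + 2κ 2 and of κ 1 . The toy linear forms below are made up (loosely NdCo 5 -like), not the calculated curves:

```python
import numpy as np

def find_crossing(T, f):
    """First temperature where f changes sign, by linear interpolation
    between the bracketing grid points."""
    s = np.sign(f)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    if len(idx) == 0:
        return None
    i = idx[0]
    return T[i] - f[i] * (T[i + 1] - T[i]) / (f[i + 1] - f[i])

# Toy temperature dependences (hypothetical), MJ m^-3 vs K:
T = np.linspace(0.0, 400.0, 4001)
k1 = -30.0 + 0.105 * T           # crosses zero near 286 K
k2 = 8.0 * (1.0 - T / 500.0)     # positive, slowly decreasing

T_SRT1 = find_crossing(T, k1 + 2.0 * k2)  # plane -> cone: k1 = -2*k2
T_SRT2 = find_crossing(T, k1)             # cone -> axis: k1 = 0
```

Applied to the calculated κ 1 (T) and κ 2 (T) curves, the same root-finding reproduces the SRT temperatures quoted in the text.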
The calculations provide the underlying physical explanation of the SRTs, which result from a competition between the uniaxial anisotropy of the itinerant electrons and a preference for the oblate Nd 3+ charge cloud to have its axis lying in the ab plane (Fig. 1). As the temperature increases, the Nd moments disorder more quickly than the Co, due to the relative weakness of the RE-TM exchange field B exch compared to the exchange interaction between itinerant moments [24]. As a result, the negative contribution to κ 1 from Nd reduces quickly with temperature, leaving the positive uniaxial contribution from the itinerant electrons. Obtaining realistic SRT temperatures therefore requires accounting for the itinerant electron anisotropy, the crystal field potential and the exchange field at a comparable level of accuracy.
We finally consider CeCo 5 and PrCo 5 . Ce has a well-known tendency to undergo transitions between trivalent and tetravalent valence states, as seen for instance in the α-γ transition [60,61]. The LSIC describes strongly-correlated RE-4f electrons, with them forming a narrow band several eV below the Fermi level [24]. Without the LSIC, the 4f electrons are less correlated and more itinerant, appearing as wider bands close to the Fermi level. The LSIC finds a lower-energy ground state if the enhanced correlation offsets the energy penalty associated with the stronger localization [15,24]. Of the RECo 5 compounds, the LSIC predicts a higher energy ground state only for CeCo 5 , implying that the Ce-4f electron is not strongly localized. Practically, we model compounds with more itinerant (weakly correlated) RE-4f electrons by performing non-LSIC DFT-DLM calculations on RECo 5 , with only F itin contributing to the free energy. The values of κ 1 calculated in this way for CeCo 5 and PrCo 5 are labelled "itinerant" in Fig. 4. We also show κ 1 values labelled "3+", calculated for strongly localized RE-4f electrons using the same method as in Fig. 2. The Ce and Pr moments are held collinear to the Co moments in the itinerant calculations [52]. Figure 4 shows a dramatic difference in the anisotropy constants calculated for the different RE-4f valences. Pr 3+ Co 5 has an ab-plane anisotropy at low temperature, which is stronger than that of NdCo 5 . This behavior is in fact expected; the leading crystal field contribution to the MCA scales as J(J − 1/2)α J (α J being the Stevens factor), and this quantity is larger for Pr than Nd [32]. The calculated plane→cone and cone→uniaxial SRTs occur at 235 K and 297 K respectively, which are higher temperatures than those calculated for NdCo 5 . Ce 3+ Co 5 meanwhile is calculated to have cone anisotropy at zero temperature, with α = 80°. A cone→axis SRT occurs at 100 K, after which the compound has uniaxial anisotropy.
The presence of only one SRT shows that Ce 3+ has a weaker planar anisotropy than Pr 3+ or Nd 3+ . This weaker anisotropy is due to CeCo 5 having a reduced B 20 CF coefficient, which correlates with a contracted a lattice parameter [52].
If instead the RE-4f electrons are treated as itinerant, both CeCo 5 and PrCo 5 are found to have strong uniaxial anisotropy. At low temperatures CeCo 5 has the higher value of κ 1 , (23.5 MJm −3 at 0 K), while above 200 K, κ 1 of PrCo 5 is larger. The κ 1 values exceed those calculated for YCo 5 and LaCo 5 , showing that delocalizing the RE-4f electrons boosts the uniaxial anisotropy.
The experimental anisotropy constants from Ref. 30 are also shown in Fig. 4. The experiments support the picture obtained from the total energy calculations, that the Ce-4f and Pr-4f electrons are more itinerant/localized (weakly/strongly correlated) respectively. CeCo 5 has uniaxial anisotropy across the entire temperature range. For PrCo 5 , although κ 1 is negative at low temperature, its magnitude is smaller than that measured for NdCo 5 (−6.5 MJm −3 vs. −33.5 MJm −3 at 4 K). As a result, at low temperature the easy magnetization direction of PrCo 5 does not lie in the ab plane, but rather in a cone with α = 23° [62]. Increasing the temperature decreases α, and a cone→axis SRT occurs at 105 K [62]. Therefore, although the Pr ions do favour ab-plane anisotropy, their contribution is weaker than from the Nd ions in NdCo 5 , at variance with the CF picture. Overall, in CeCo 5 (PrCo 5 ) experiments find a smaller uniaxial (planar) contribution from Ce (Pr 3+ ). As a result, the calculated uniaxial anisotropy of CeCo 5 is larger than found experimentally, while the plane→cone SRT of PrCo 5 at 235 K is missing from experiments.
The anomalous behavior of PrCo 5 in the context of CF theory was identified in Ref. [63], where it was proposed that in PrCo 5 , Pr assumes a mixed valence state, e.g. Pr 3.5+ , whose properties lie between Pr and Ce. The calculations shown in Fig. 4 support this view, if we make the reasonable assumption that the anisotropic properties of the mixed valence state are bounded by those of the strongly localized and more itinerant (strongly and weakly correlated) Pr-4f electrons. In a similar way, the experimentally-observed reduction in CeCo 5 uniaxial anisotropy compared to the calculations could be explained if the Ce-4f electron was more localized (correlated) than predicted by the "itinerant" calculations. From Fig. 4, such an electron would be expected to have a reduced contribution to the uniaxial anisotropy. Within this picture, encouraging the itinerancy of the Ce-4f electron through, e.g., chemical pressure would boost the uniaxial anisotropy of CeCo 5 .
Apart from highlighting the 4f -electron physics of Ce and Pr, our calculations serve as a reminder of the remarkable properties of SmCo 5 . As well as its huge zero temperature uniaxial anisotropy, the large spin moment of Sm strengthens its coupling to the exchange field, so that the Sm moments stay ordered up to higher temperatures. Furthermore, mixing of the higher-J multiplets also boosts the anisotropy [64]. As a result, as shown in Fig. 2, the κ 1 value of SmCo 5 remains larger than that of YCo 5 and LaCo 5 (where the RE is nonmagnetic), even at 600 K. Previously we have shown that the electronic structure of SmCo 5 close to the Fermi level also gives it the highest T C of the RECo 5 compounds [24].
In summary, we have demonstrated a framework to calculate the finite-temperature MCA of RE-TM magnets. Combined with the previously established DFT-DLM method which provides finite-temperature magnetization and T C [24], we have a full framework to calculate the intrinsic properties of RE-TM magnets which requires no experimental input beyond the crystal structure. The validation of our method on the RECo 5 magnet class opens the door to tackling other RE-TM magnets, like Nd-Fe-B, REFe 12 and Sm 2 Co 17 . The good performance of the calculations for SmCo 5 will allow us to propose strategies to improve this magnet, e.g. through modification of the CF potential and/or exchange field through TM doping or application of pressure. More generally, our work realizes the proposal made two decades ago in Ref. [65], which suggested that rather than trying to compare first-principles CF coefficients to experiment (themselves obtained by fitting), the comparison should instead be made for anisotropy constants.
The present work forms part of the PRETAMAG project, funded by the UK Engineering and Physical Sciences Research Council, Grant No. EP/M028941/1. We thank H. Akai and M. Matsumoto for useful discussions.

Cooperative Strategies for CO Homologation
Recent approaches in which at least two metal or main-group centres are involved in the homologation of CO are reviewed. We have categorised the strategies into three broad areas: i) the reductive homologation of atmospheric CO at a metal or main-group centre, ii) the reductive homologation of metal-carbonyl CO units, and iii) the reductive homologation of CO with M–M, B–Li, Si=Si, and B≡B bonds.
Introduction
In response to concerns over anthropogenic CO 2 , interest in developing methods to produce liquid hydrocarbons from renewable resources has heightened in recent years. One established technology to achieve this is the Fischer-Tropsch (F-T) process. The F-T process converts syngas mixtures (H 2 /CO) into short-to-medium chain hydrocarbons using heterogeneous transition metal catalysts (Eq. 1). CO 2 can be incorporated through the water-gas shift reaction (Eq. 2).
(Eq. 1) n CO + (2n + 1) H 2 → C n H 2n+2 + n H 2 O

(Eq. 2) CO 2 + H 2 ⇌ CO + H 2 O

Although syngas is typically produced from coal or natural gas, biomass has also been identified as a viable source. 1 F-T catalysis produces hydrocarbons with a range of carbon chain-lengths determined by the Anderson-Schulz-Flory distribution. 2 Carbon monoxide (CO) is a diatomic molecule with a bond order of 3. The HOMO is a σ-orbital with a major contribution from the carbon atom, while the LUMO consists of two degenerate π* orbitals (Figure 1). Accordingly, CO is capable of acting as either a Lewis base or a Lewis acid through HOMO- and LUMO-based reactivity respectively. The reduction of CO is also possible through electron transfer to the LUMO from a suitable reductant.
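The Anderson-Schulz-Flory statistics mentioned above take the standard form W n = n(1 − α)² α^(n−1), where α is the chain-growth probability and W n the weight fraction of C n products. This can be made concrete with a short sketch (α = 0.8 is an arbitrary illustrative choice, not a value from this article):

```python
def asf_weight_fractions(alpha, n_max=30):
    """Anderson-Schulz-Flory weight fraction of C_n products,
    W_n = n * (1 - alpha)**2 * alpha**(n - 1),
    for chain-growth probability alpha, truncated at n_max."""
    return {n: n * (1 - alpha) ** 2 * alpha ** (n - 1)
            for n in range(1, n_max + 1)}

w = asf_weight_fractions(alpha=0.8)
c5_c11 = sum(w[n] for n in range(5, 12))  # weight in a C5-C11 cut
```

The distribution illustrates why unselective F-T chain growth caps the yield of any single chain-length window, motivating the search for more selective homologation strategies discussed below.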
While homogeneous F-T catalysis remains significantly underdeveloped with respect to the heterogeneous commercial process, there is growing interest in this area.
Homogeneous models are amenable to study by solution techniques (NMR, IR, UV-vis spectroscopy) and can offer unique insight into fundamental reactivity that may be of relevance to heterogeneous F-T catalysis. In the longer term, using homogeneous catalysis to direct the selectivity of F-T reactions to high-value products is an attractive target.
In this frontier article, we summarise contemporary research in CO homologation using cooperative strategies. That is, reactions in which two or more components (typically transition metal or main group complexes) react with CO to form carbon chains by forming new C-C bonds. These reactions typically involve three stages: initiation, propagation, and termination. Cooperative effects have been observed in each of the separate stages of chain growth.
The initiation step often involves the reduction of CO with electrons from a transition metal or main group complex.
Results and Discussion

i) Reduction of CO at a metal or main-group centre
The reductive homologation of CO was first observed by Justus von Liebig in 1834 through reaction with molten potassium to form "potassium carbonyl" (KCO). 4 In 1960, "potassium carbonyl" was shown to be a mixture of potassium ethynediolate (1) and potassium benzenehexolate (2) salts (Scheme 1a). 5 In 1985 Evans, Atwood and co-workers reported a homogeneous metal complex that could perform a similar reaction with CO to form coupled products (Scheme 1b). Hence, the samarium(II) sandwich complex 3·THF 2 reacted with CO at ambient temperatures and pressures to yield multiple products including 4. 4 contains a ketenecarboxylate motif derived from three CO units that bridges two samarium(III) centres. 6 Similar reactivity is accessed through the lanthanum congener of 3, in this case resulting in a diamagnetic product that is amenable to characterisation by multinuclear NMR spectroscopy. 7 Evans noted that the inability of f-block complexes to form stable CO adducts alongside their high oxophilicity affords this unique chain growth reactivity. Both reactions occur with oxidation of Ln(II) to Ln(III) (Ln = Sm, La).
In 2006, Cloke and co-workers reported the reductive cyclotrimerisation of CO with the uranocene(III) complex 5a to form a deltate anion (7). 8 The reaction occurs with oxidation of U(III) to U(IV) and requires the cooperative action of two equiv. of the actinide complex. Remarkably, further studies showed that acyclic and cyclic carbon chains of different lengths and shapes -ethynediolate (6) and squarate (8) -could be accessed by controlling either the stoichiometry of the reaction or the steric profile of the ligands on the uranium centre (Scheme 2a). [9][10][11] In the case of the deltate, Maron and co-workers propose that the same uranium(III) (mono)carbonyl is reduced by a second equivalent of the uranium(III) starting material, resulting in the formation of a μ-η 2 :η 2 -CO ligand bridging two uranocene centres. A second CO equivalent reacts with this bridging CO ligand to directly form a new C-C bond and a {μ-η 2 :η 2 -C 2 O 2 } 2− moiety that spans two uranium centres. A third equivalent of CO inserts into the U-C bond of this intermediate which, following a series of isomerisation steps, results in the formation of the deltate product (Scheme 2c). 12 While the proposed uranium(III) carbonyl intermediates have not been isolated, closely related structurally characterised uranium(III) carbonyl complexes are known. 13,14 The potential for f-block reductants to couple CO units together has been further elaborated by other groups, though exclusively towards the formation of ethynediolate products. Arnold and co-workers showed that simple uranium(III) alkoxide and amide complexes (9a-c) reduce CO under ambient conditions to ethynediolate-containing products (10a-c). In the case of the amide (9a), further reactivity of the ethynediolate is observed; C-H activation of a methyl group of the ligand occurs by addition across the C≡C bond, forming metallocycle 11a (Scheme 3).
15,16 A uranium(III) complex supported by a tripodal (tris)amidoamine ligand (12) also reacts with CO to form an ethynediolate (13), as reported by Liddle and co-workers. Notably, 12 can be regenerated from the ethynediolate uranium(IV) species through further silylation and reduction steps. This creates a closed chemical loop with respect to the uranium fragment (Scheme 4). 17 Scheme 3: Reduction of CO by uranium tris-amide (9a) and tris-aryloxide (9b-c) complexes to the ethynediolate (10a-c). Further reactivity observed for 10a to 11a. Transition metal analogues of this type of reactivity are far rarer; those that are known often involve direct participation of metal-ligand bonds. For example, in 2018, Kays and co-workers reported the deoxygenative coupling of CO using the bis(terphenyl) iron(II) complex 14 to form squaraine 15 and iron carboxylate complex 16, along with concomitant formation of iron pentacarbonyl (Scheme 5). 18 The reaction is proposed to proceed through the ketene intermediate 17, which derives from the fragmentation of CO into C and CO 2 . Dimerisation of 17 forms 18, which reacts with excess CO to yield the products 15 and 16, along with concomitant elimination of iron pentacarbonyl. This system is unusual as the organometallic reagent does not reduce CO but facilitates a disproportionation of CO. Cooperativity between the two iron centres is critical in the formation of the squaraine.
While the reaction of low-valent f-block compounds with CO has been studied in detail in the last two decades, analogous chemistry with main-group reagents has only recently come into focus through the study of silylenes. In contrast to the 1e− reduction per metal centre of f-block complexes, silylenes function as 2e− reductants per silicon(II) centre. Early work in this area demonstrated that the addition of transient silylenes [R 2 Si:] (R = alkyl, aryl) to CO results in the formation of simple adducts of the form [R 2 Si:(CO)], stable only at −196 °C. [19][20][21]
The CO adduct of a gallium-substituted silylene has been isolated. 22 No coupling products were observed. Only through the inclusion of cooperative strategies has CO coupling been observed with these reagents.
In 2019, Driess and co-workers reported the silicon-mediated coupling of CO with bis(silylene) compounds supported by dinucleating ligands incorporating either a xanthenyl (18a) or ferrocenyl (18b) backbone (Scheme 6). 23 The resultant products are 1,3-disilyloxetanes (19a-b) in which two silicon(IV) centres are bridged by a ketenylidene and an oxo ligand. The oxo ligand is derived from complete cleavage of a C≡O bond. A total of four electrons are transferred to two molecules of CO. Crucially, cooperativity between the silylene units is necessary for CO reduction: this type of reactivity is not observed using analogous amidinate or β-diketiminate mono(silylene) reagents, or when the silicon sites of the bis(silylene) compound are separated by a distance of 6 Å.
Upon modifying the dinucleating ligand to include an ortho-carborane backbone, a different outcome was observed. 24 Hence, 18c reacts to form 21 via a multistep process (Scheme 6). The first step is proposed to form an intermediate identical to those observed with the xanthenyl- and ferrocenyl-based ligands. In the case of the carborane-based ligand this intermediate is unstable and reacts further with cleavage of the Si-C carborane bond and concomitant C ketene -C carborane bond formation, resulting in monomer 20. Dimerisation of 20 results in the crystallographically and spectroscopically characterised product, 21.
More recently, it has been shown that dinuclear systems are not required for CO homologation and that cooperative effects can be achieved through intermolecular rather than intramolecular assistance. Aldridge and co-workers reported the reductive coupling of CO using a boryl-substituted acyclic silylene 22 to form an ethenediolate species 23 (Scheme 7). 25 The acyclic silylene 22 has previously been reported to activate H 2 ; the high reactivity of 22 has been ascribed to the strong σ-donating ability of the boryl ligand and the resultant small HOMO-LUMO gap. 26
ii) Reduction of M-CO with an external reductant
The previous section highlighted the capability of some redoxactive metal centres to reduce atmospheric CO as a first step for carbon chain growth. An alternative cooperative strategy is to separate the CO binding event from the reduction and chain growth steps. To achieve this, transition metal carbonyls are typically used as the CO source, while a second low-oxidation state reagent is used as a reductant.
In 1982 Bercaw, Mertes, and co-workers reported the reaction of a zirconocene(II) reductant (24) with an iron carbonyl dimer (Scheme 8a). 27 (32). The origin of the ketenylidene ligand is likely two cis-disposed CO units which are coupled with concomitant reductive cleavage of a CO triple bond to form a μ 3 -oxo ligand. 32,33 In 2013, Suess and Peters demonstrated that an iron (bis)alkylidyne complex derived from a metal carbonyl is capable of releasing an alkene upon hydrogenation (Scheme 9). The carbon atoms in the alkylidyne ligands originate from CO. 34 Reduction of iron dicarbonyl complex 33 using potassium, followed by addition of trimethylsilyl triflate resulted in generation of the (bis)alkylidyne complex 34. Reaction of 34 with dihydrogen gas resulted in liberation of the C 2 fragment 35, as exclusively the Z-alkene. Paramagnetic iron-containing products from the hydrogenation have eluded characterisation and are yet to be isolated. This example is notable as H 2 , a key component of syngas used in F-T catalysis, is used to achieve the dissociation of a CO derived hydrocarbon from the transition metal fragment.
In 2016, Buss and Agapie reported the 4-electron deoxygenative coupling of CO using a molybdenum complex (Scheme 10). The Mo(II) dicarbonyl complex 36 can be successively reduced to the Mo(0) (37), Mo(-II) (38), and Mo(-III) (39) species with 2, 4, and 7 (excess) equiv. of KC8, respectively. The deoxygenative coupling of the CO ligands in the reduced complexes to form C2 fragments is effected by addition of a trialkylsilyl chloride and further equivalents of reductant, forming either a disilaketene (42a, R = Me) or a disilaethynolate (42b, R = iPr) product together with the molybdenum dinitrogen complex 41.
During the course of these experiments the siloxyalkylidyne intermediate 40 was isolated. Both the structure and reactivity of 40 parallel those of the siloxyalkylidyne complexes explored by Lippard and co-workers (vide supra).30 The P-Ar-P pincer ligand is a key design motif in this chemistry, as the arene functions as a 'reservoir of electrons' capable of stabilising the myriad oxidation states of molybdenum through a range of coordination modes (η6-, η4-, η0-).35 Subsequent computational and synthetic studies on the mechanism show that the C-C bond formation step proceeds via a (bis)alkylidyne-type complex.36 In 2018, our group reported the reductive homologation of CO at aluminium. The chain-growth steps can be conceptualised in terms of insertion of CO or CO2 into the Al-C bond of the carbon chain, and the interaction of CO and CO2 with the Lewis acidic aluminium centre plays a key role in the chain-growth processes. This reaction is remarkable in that chain growth occurs with the formation of C3 and C4 intermediates that still possess reactive sites in the form of reactive M-C bonds. This allowed the defined steps of chain growth from a Cn to the Cn+1 homologue (n ≥ 2) in a homogeneous F-T model to be observed for the first time.37
Scheme 10: Four-electron deoxygenative coupling of CO at molybdenum.
iii) Reduction of CO with M-M, B-Li, Si=Si and B≡B bonds
The strategies for carbon chain growth from CO outlined above involve the use of low-valent metal centres which can transfer electrons to CO to initiate a chain growth sequence. An alternative approach to CO homologation involves the use of low-oxidation state compounds that possess a metal-metal or element-element bond. These bonds are potential reactive sites, and reduction of CO can occur by transfer of electrons in the bond to the substrate with concomitant oxidation of the metal centres. The cooperative action of two reactive sites, this time connected directly through a bond in the ground state of the starting material, is a key consideration for the reaction with CO.
Wayland and co-workers first utilised this strategy in 1989. They reported the selective dimerisation of CO using a rhodium porphyrin dimer (51) containing a Rh-Rh bond (Scheme 13). The product of reductive coupling, an ethanedionyl complex (52) derived from the double insertion of CO into the Rh-Rh bond, was not amenable to NMR or solid-state characterisation; its identity was inferred from IR studies on 12CO and 13CO isotopomers.39 In 2016, Kinjo and co-workers reported the isolation of the boryllithium compound 53 (Scheme 14). Insertion of CO into the B-Li bond of this highly reactive species forms the bora-acyllithium 54, which tautomerises to 54'. Dimerisation of 54' results in the lithium ethenediolate 55, which has been tentatively assigned using 11B NMR spectroscopy. 55 is unstable and can only be observed transiently; however, reaction with trimethylsilyl chloride allows the isolation of 56, which has been structurally characterised. Overall, the reaction results in the reductive coupling and functionalisation of two CO molecules. In a similar manner, boryllithium 57 reacts with CO to form an unobserved ketene (58), which dimerises above -78 °C to form the isolable product 59 (Scheme 15).41
Scheme 15: Tetramerisation of CO by a boryllithium
Incorporating SmCl3 into this reaction gave a completely different result (Scheme 16). Samarium(III) chloride (60) and boryllithium 57 react to yield the samarium(III) boryl complex 61, which in the presence of CO can go on to form the bora-acyl samarium complex 62. 62 is proposed to originate from the insertion of CO into the Sm-B bond of 61; similar reactivity has been observed previously.42 If the reaction with CO occurs in the presence of an excess of 57, both 58 and 62 can be generated in situ. These intermediates can combine to form a C3 carbon chain bearing both Sm and Li substituents, 63. Both isotopic labelling experiments and DFT calculations support the proposed mechanism. The tandem cooperative CO homologation strategy employed to access the C3 product 63 provides a novel approach towards the construction and elaboration of carbon chains.
The use of well-defined s-block metal complexes to reductively couple CO was first reported in 2019 by Jones, Maron and co-workers (Scheme 17a). Reaction of a magnesium(I) dimer (64a-d) with one equivalent of a donor ligand (L = DMAP) yields an asymmetric derivative of the parent dimer (65a-d) (Scheme 17a-b). Remarkably, while the parent dimer does not react with CO, the ligated dimer reduces CO and forms carbon chains. Sterically bulkier magnesium centres (64a-b) yield the deltate {C3O3}2-, while comparatively less crowded magnesium centres (64c-d) yield a bridging ethenediolate {C2O2}2-. The steric profile of the magnesium centres controlling the degree of CO homologation parallels observations by Cloke and co-workers when studying uranocene reductants.[8-12] DFT calculations were employed to investigate the mechanism of chain growth, which is initiated by insertion of CO into the Mg-Mg bond. Altering the source of CO to a transition metal carbonyl complex changes the observed reactivity (Scheme 18b).39,45 The N-heterocyclic carbene (NHC) ligands on the diboryne unit play an important role in determining the reactivity toward CO.46 The stabilisation of the singlet ground state of B2 permits back-donation from the B≡B triple bond into the π* antibonding orbital of CO, allowing both binding and activation of CO akin to transition-metal reactivity.47 Addition of excess CO to 77 affords the homologation product 76.
Summary and Perspective
Cooperative strategies have emerged as a key feature in the initiation and propagation events in CO homologation. Through the use of two or more metal centres, the discrete coordination, reduction, and finally C-C bond formation steps can be achieved. In many cases either proposed transition states or intermediates involve the cooperative action of two or more reactive sites with CO. This approach also provides a coordination framework made of multiple sites which can support the growing carbon chain.
The mechanism of any chain growth sequence using CO (or CO 2 ) as a C 1 building block to form C n chains can be conceptualised in terms of initiation, propagation, and termination events. In the long term, the detailed understanding of each of these events may allow for the development of methods to convert CO and CO 2 to higher-value products with greater selectivity and ultimately pave the way for catalytic methods.
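This initiation/propagation/termination picture is the same one used to rationalise product distributions in classical F-T catalysis: if every propagation step occurs with a constant chain-growth probability α, Anderson-Schulz-Flory (ASF) statistics follow. The sketch below is illustrative only; the value of α used in the usage note is arbitrary and is not taken from any system discussed in this review.

```python
# Anderson-Schulz-Flory (ASF) statistics for chain growth with a
# constant propagation probability alpha (0 < alpha < 1):
#   mole fraction of C_n chains:  x_n = (1 - alpha) * alpha**(n - 1)
#   mass fraction of C_n chains:  w_n = n * (1 - alpha)**2 * alpha**(n - 1)


def asf_mole_fraction(n: int, alpha: float) -> float:
    """Mole fraction of chains of length n for growth probability alpha."""
    return (1.0 - alpha) * alpha ** (n - 1)


def asf_mass_fraction(n: int, alpha: float) -> float:
    """Mass (weight) fraction of chains of length n."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)
```

For α = 0.7, for example, the C1, C2, and C3 mole fractions are 0.300, 0.210, and 0.147: selectivity for any single chain length is intrinsically limited under ASF statistics, which is one reason the discrete, well-defined Cn to Cn+1 steps observed in the molecular systems above are mechanistically valuable.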
The initial activation of CO often involves the formation of a metal-carbon bond. Formation of metal-carbon bonds occurs most commonly through coordination of CO to a metal centre. In a number of cases described herein, metal carbonyls have been computationally shown to be key intermediates. Isolable transition metal carbonyl complexes have also been used as the CO source in homologation strategies. In the latter approach, geometric and electronic restrictions on CO reactivity provide insight into the C-C bond formation process on heterogeneous F-T catalysts. Geometrically, a cis carbonyl relationship has been observed in numerous systems to be critical in transition-metal-mediated chain-growth steps. Electronically, transition-metal-bound CO units are shown to react with low-oxidation state compounds where atmospheric CO is inert.
Following the generation of a reactive organometallic intermediate from CO, the formation of C-C bonds occurs in the propagation stage, resulting in growth of a carbon chain. In almost all cases this stage is studied computationally, as the reactive intermediates are transient species; only the final chain-growth products are experimentally isolated and characterised.
For the model complexes described herein, the termination of chain growth usually involves the formation of stable M-O bonds. Comparatively few studies have demonstrated dissociation of the carbon chain from the metal sites, yet the onward reaction of the carbon chain and its liberation from the metal fragments are essential for the development of catalytic processes. Trialkylsilyl halides have been used as hydrogen surrogates to liberate carbon chains from metal complexes. The hydrogenative coupling of CO-derived (bis)alkylidyne ligands reported by Suess and Peters is the only example to date in which dihydrogen is used to displace a transition-metal-bound carbon chain. A detailed mechanistic understanding of the reactivity of dihydrogen with metal-bound carbon chains is currently lacking, and is critical both to developing F-T models and to catalytic homogeneous chemistry.
As the field develops, we anticipate that more strategies will arise to couple CO units together at metal centres. The study of these numerous disparate systems will build a clearer understanding of CO reactivity. Ultimately, this understanding has the potential to culminate in catalytic methodologies in which the controlled coupling of CO to value-added products is achieved.
Conflicts of interest
There are no conflicts to declare.
"year": 2020,
"sha1": "0619d6d7659116371a1a87a2129a7a79b3f4a8c0",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/dt/d0dt01564d",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "61ed9822751e3d37cd277db2468fc5043135453c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Nanodrug Delivery Systems Modulate Tumor Vessels to Increase the Enhanced Permeability and Retention Effect
The use of nanomedicine for antitumor therapy has been extensively investigated for a long time. Enhanced permeability and retention (EPR) effect-mediated drug delivery is currently regarded as an effective way to bring drugs to tumors, especially macromolecular drugs and drug-loaded pharmaceutical nanocarriers. However, a disordered vessel network and occluded or embolized tumor blood vessels seriously limit the EPR effect. To augment the EPR effect and improve curative effects, this review focuses on the perspective of tumor blood vessels and analyzes the relationships among abnormal angiogenesis, abnormal vascular structure, irregular blood flow, extensive permeability of tumor vessels, and the EPR effect. We also discuss nanoparticles, including liposomes, micelles, and polymers, that extravasate through the tumor vasculature and are designed to modulate tumor vessels, thereby increasing the EPR effect and their therapeutic effect.
Introduction
Solid tumors are a major cause of death worldwide and their treatment remains a challenge [1-3]. Chemotherapy is one of the few treatment options available for metastasized tumors that cannot be removed surgically; however, the effectiveness of this therapeutic modality is not yet satisfactory [4]. This problem mainly stems from the lack of tumor selectivity of these agents; hence, the occurrence of severe adverse effects limits the usage of chemotherapy [5]. Nanomedicines have been designed to guide drugs more precisely to tumor cells and away from sites of toxicity. These agents have numerous theoretical advantages over low-molecular-weight drugs, including high drug loading, specific targeting, and the ability to protect the payload from degradation and release the drug in a controlled or sustained manner [6]. Theoretically, nanomedicines with larger particle size leak more slowly from blood vessels than most chemotherapy drugs. Fortunately, vascular leakage is a major feature of the vasculature of solid tumors. Specifically, tumor neovasculature has larger lumens and wider fenestrations (200 nm to 1.2 µm in diameter) due to its lack of a smooth muscle layer and pericytes [7]. When injected intravenously, nanomedicines ranging in size from 10 to 500 nm tend to circulate for a long time and can preferentially access the tumor tissue through the leaky tumor vasculature; subsequently, they are retained in the tumor bed due to reduced lymphatic drainage [8-12]. This pathophysiological phenomenon, based on abnormal tumor angiogenesis and increasing the delivery of nanomedicines to tumors, is known as the enhanced permeability and retention (EPR) effect [10-13]. Matsumura and Maeda first reported the EPR effect in 1986 [11]. Follow-up studies rigorously verified that the EPR effect can be observed using macromolecules with an apparent molecular size >45 kDa (the threshold for renal clearance) and a longer plasma half-life.
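The size thresholds quoted above (renal clearance below roughly 45 kDa; preferential tumor accumulation for long-circulating particles of roughly 10-500 nm) can be collected into a toy screening function. This is an illustrative sketch only: the function name and hard cut-offs are our own simplification of the approximate figures in the text, and real carrier behaviour also depends on shape, charge, and surface chemistry.

```python
# Toy screen for EPR-mediated delivery using the approximate thresholds
# quoted in the text: macromolecules above ~45 kDa escape renal
# clearance, and long-circulating particles of ~10-500 nm can
# extravasate through leaky tumor vasculature and be retained.
# (Illustrative only; these cut-offs are rough figures, not design rules.)

RENAL_THRESHOLD_KDA = 45.0           # approximate renal filtration cut-off
EPR_SIZE_RANGE_NM = (10.0, 500.0)    # approximate EPR size window


def likely_epr_candidate(molecular_weight_kda: float, diameter_nm: float) -> bool:
    """Return True if a carrier passes both rough EPR criteria."""
    escapes_kidney = molecular_weight_kda > RENAL_THRESHOLD_KDA
    in_size_window = EPR_SIZE_RANGE_NM[0] <= diameter_nm <= EPR_SIZE_RANGE_NM[1]
    return escapes_kidney and in_size_window
```

On these criteria a ~100 nm long-circulating liposome (effectively non-filterable) passes, whereas a free small-molecule drug of well under 1 kDa and about 1 nm fails, matching the contrast drawn in the text between low-molecular-weight agents and nanocarriers.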
In recent years, Ding et al. conducted real-time studies of human kidney tumors using X-ray computed tomography to confirm the existence of the EPR effect in humans. The results showed that a significant EPR effect can be found in >87% of human kidney tumors [14]. However, low-molecular-weight contrast agents do not stay in the tumor and can be washed out of it within a minute, which greatly differs from the retention of macromolecular drugs in tumors. Therefore, Maeda et al. reported a more distinct method to demonstrate the EPR effect in humans by conjugating lipiodol with a macromolecular nanodrug [15]. This method lasts longer than X-ray computed tomography and can be used to further explore the significant difference between the EPR effect of macromolecular drugs and that of their low-molecular-weight counterparts.
Nanodrug delivery is based on the accumulation of drugs in tumors due to the EPR effect, and the subsequent release of the therapeutic payload [11,16]. However, the EPR effect is often inadequate in tumors; this inadequacy can be attributed to high interstitial fluid pressure (IFP), the dense extracellular matrix (ECM), and occluded or embolized tumor blood vessels [12,17,18]. Prolonged circulation of the drug increases its extravasation into the tumor through the EPR effect. Clinically, it has been demonstrated that long-circulating liposomes, for example, doxorubicin (DOX)-loaded polyethylene glycol (PEG)ylated liposomes (Doxil), reduce opsonization and premature clearance, increase the blood circulation time, and potentially enhance drug accumulation in the tumor [19]. However, when the EPR effect is insufficient, the drug may instead extravasate into normal tissues and cause toxicity. Thus, there is an urgent need to identify the physiological barriers that limit the EPR effect in tumors, with the aim of developing methods to enhance tumor penetration and retention, thereby improving tumor targeting and the therapeutic effect. In this review, we analyze the barriers to drug delivery, focusing on the influence of the tumor vasculature on the EPR effect, and discuss methods for regulating tumor blood vessels through nanodrug delivery systems to enhance the EPR effect [20-24].
Abnormal Vascular Functions Affect the Tumor EPR Effect
To satisfy the overgrowth of tumor cells, solid tumors need to induce and maintain a dedicated tumor blood supply, termed neovascularization. Under inflammatory or hypoxic tumor conditions, cells such as vascular endothelial cells release vascular permeability mediators, resulting in more enhanced vascular permeability in tumors than in normal tissue, which can be demonstrated by angiography [25]. However, due to their short half-life and rapid dilution in the bloodstream, these mediators mainly affect tumor vessels, but not normal tissue blood vessels. In such regions, macromolecules ranging from 10 to 500 nm (e.g., macromolecular anticancer agents, albumin, immunoglobulin, micelles, liposomes, and protein-polymer conjugates) can selectively leak out from the vascular bed and accumulate in the interstitial space. However, in solid tumors the EPR effect exhibits great heterogeneity: tumors show different EPR effects depending on their type and size, the patient, and the developmental stage. Tumors with high blood vessel density (e.g., hepatocellular carcinoma) show a strong EPR effect, whereas others with low vascular density (e.g., pancreatic cancer) show a weak EPR effect [5]. Therefore, accurate monitoring and evaluation of the EPR effect in different tumors is essential for the development of personalized EPR-mediated treatment plans.
In principle, due to the widespread presence of the EPR effect in tumors, nanomedicines based on the EPR effect show great promise for improving the efficacy of systemic anticancer drug therapy. However, their full anticancer potential has been hindered by biological and pathophysiological barriers [26]. Obviously, the vascular system of tumors, which exhibits differing vessel density, maturity, perfusion, and pore cutoff size, can be considered one of the main factors affecting the EPR effect [27]. In this review, we summarize the three main routes through which abnormal tumor blood vessels affect the EPR effect, together with the related vascular mediators (Table 1).

Table 1. Vascular mediators involved in tumor angiogenesis and vascular permeability.

Angiogenesis:
- Vascular endothelial growth factor (VEGF; VEGF-A/B/C/D): key driver of tumor angiogenesis and vascular permeability (see text); overexpressed in most solid tumors [28-30].
- Tumor necrosis factor (TNF)-α: mediates monocyte differentiation into angiogenic cells that support tumor angiogenesis; a multipotent proinflammatory cytokine with vascular permeability activity that enhances vascular leakage by disrupting the EC adhesion junction VE-cadherin [22,31-36].
- Acidic fibroblast growth factor (FGF)/FGF-1: interacts with receptor tyrosine kinase subtypes to induce EC proliferation and maintain tumor angiogenesis [37].
- Basic FGF/FGF-2: controls angiogenesis by inducing the expression of VEGF through paracrine and endocrine mechanisms [41-43].
- Placenta growth factor (PLGF): binds only to VEGFR-1 and induces tumor angiogenesis, promoting the survival of ECs in tumor-associated blood vessels [44].
- Epidermal growth factor (EGF): a key EGF receptor (EGFR) ligand and one of many growth factors that drive the expression of VEGF [45].
- Hepatocyte growth factor (HGF): stimulates cell motility and the secretion of proteinases; plays an important role in tumor invasion and progression [46].
- Hypoxia-inducible factor (HIF)-1α: upregulates VEGF gene expression through hypoxia response element binding in the VEGF promoter region [47-51].
- Transforming growth factor (TGF)-β: induces strong VEGF production in recruited hematopoietic cells, leading to activated angiogenesis and vascular remodeling; low TGF-β levels contribute to angiogenesis, whereas high levels can inhibit EC growth [52-54].
- Induces angiogenesis indirectly by activating the expression of VEGF in smooth muscle cells [55].
- Interleukin (IL)-3: stimulates EC movement and promotes the formation of new blood vessels in vivo; also stimulates the migration and proliferation of vascular smooth muscle cells [56].
- IL-6: regulates the synthesis of VEGF and influences tumor angiogenesis by inducing the production of VEGF [57].
- IL-8: enhances EC survival, proliferation, and matrix metalloproteinase production, and regulates angiogenesis [58].
- Neuropilin 1 and 2: regulate receptor-ligand interactions of the VEGF family [59].
- Adrenomedullin: promotes angiogenesis, protects cells from apoptosis and vascular injury, and affects vascular tone and permeability [60].
- Stromal cell-derived factor 1 (SDF-1), a chemokine: synergizes with VEGF to induce angiogenesis in human ovarian cancer; in invasive breast cancer, stromal fibroblast-derived SDF-1 promotes angiogenesis by recruiting bone marrow-derived endothelial precursors; acts through the receptor CXC motif chemokine receptor type 4 [61].
- Endostatin: inhibits cell cycle control and antiapoptotic genes in proliferating ECs, thus inhibiting angiogenesis [62].
- Integrins: adhesion molecules such as α6β1 and α6β4 mediate VEGF-induced angiogenesis, regulating the adhesion of ECs to the ECM and thereby promoting the migration and survival of the tumor vasculature; αvβ3, αvβ5, and α5β1 have also been shown to mediate angiogenesis [63,64].
- Pigment epithelium-derived factor: inhibits angiogenesis via downregulation of VEGF [65].
- Nuclear factor kappa-B (NF-κB): when activated, binds DNA, promotes cell proliferation, regulates apoptosis, promotes angiogenesis, and stimulates invasion and metastasis [66].
- Thyroid hormone: exerts proangiogenic effects on ECs and vascular smooth muscle cells, initiated by integrin αvβ3 extracellular-domain hormone cell-surface receptors [67].
- Involved in angiogenesis through its proteolytic role in tissue remodeling, as well as the growth of new blood vessels and the release of angiogenic factors sequestered in the matrix [68].
- Endogenous carbon monoxide (CO) and heme oxygenase (HO): play an important role in regulating vascular tension and inducing angiogenesis; can significantly increase vascular permeability and blood flow; highly expressed in liver, prostate, renal, and colorectal cancer [76,77].

Vascular permeability:
- VEGF (VEGF-A/B/C/D): as above; overexpressed in most solid tumors [28-30].
- Bradykinin (BK): activates EC-derived NO synthase, leading to an increase in NO, and thereby increases vascular permeability [78,79].
- Hydroxyprolyl3-BK: as above; found in advanced cancer [80-82].
- Inducible nitric oxide synthase (iNOS) and NO: NO is an effective endothelium-derived vascular regulator that plays an important role in vascular permeability, cell proliferation, and extravasation (the EPR effect), inducing vasodilation and increasing blood flow.
Abnormal Angiogenesis
Angiogenesis is essential for the continuous growth and development of solid tumors. Tumor vessels provide oxygen and nutrients, remove waste products, supply a favorable niche for cancer stem cells, and serve as a conduit for tumor cell metastatic spread and immune cell infiltration. Unlike normal blood vessels, tumor blood vessels with abnormal structure and function impede the delivery of adequate oxygen, as well as of therapeutic drugs, to cancer cells [88,89]. During cancer progression, the overexpression of proangiogenic factors drives pathological angiogenesis. An imbalance between local proangiogenic and antiangiogenic factors may lead to the proliferation and migration of endothelial cells (ECs) and new vessel formation. Furthermore, pericyte coverage of ECs is often absent in the tumor vasculature. Compared with the organized microvasculature of normal tissue, with its regular branching order, the vascular organization of tumor tissue is disorganized and lacks the conventional hierarchy. Abnormal angiogenesis may lead to structural and functional abnormalities of the vascular system, which is often tortuous, unorganized, and excessively leaky [90,91]. This feature contributes to the vascular permeability of fluids and the escape of metastatic cancer cells [92,93]. Furthermore, the solid pressure generated by the proliferation of cancer cells compresses the blood and lymphatic vessels in the tumor, further impairing blood and lymphatic flow. These abnormal vascular structures collectively lead to an abnormal tumor microenvironment (TME), characterized by high IFP, hypoxia, and acidosis [88,94,95]. A physiological consequence of these vascular abnormalities is heterogeneity of tumor blood flow, which can interfere with the EPR effect and the uniform distribution of drugs within the tumor.
Tumor cells can promote blood vessel sprouting by releasing angiogenic molecules that bind to their respective receptors in adjacent cells or by paracrine signals [96,97]. Vascular endothelial growth factor (VEGF) appears to play the most critical role in physiological and pathological angiogenesis among all the known angiogenic molecules. It is overexpressed in the majority of solid tumors [28,29] and can promote the survival and proliferation of ECs, increase the display of adhesion molecules on these cells, and increase vascular permeability. By downregulating VEGF signaling in solid tumors, the vasculature may return to a more "normal" state, accompanied by decreased IFP, increased tumor oxygenation, and improved drug permeability in these tumors [98].
In addition to VEGF, other factors and proteins can promote the abnormal formation of tumor blood vessels. Thus far, 28 proangiogenic factors/genes have been found to mediate tumor angiogenesis [76,77], including the fibroblast growth factors (FGFs), hypoxia-inducible factor (HIF), platelet-derived growth factor-B (PDGF-B), tumor necrosis factor-α (TNF-α), chemokines, integrins, and transforming growth factor-β (TGF-β), as well as their receptors [76,99-103]. Acidic and basic FGF (FGF1 and FGF2) can induce angiogenesis [39]. FGFs stimulate the proliferation and migration of ECs, as well as the production of collagenase and plasminogen activator; PDGF stimulates angiogenesis and is related to the aging process of the tumor vasculature in vivo [42,43]. TGF-β possesses dual pro- and antiangiogenic properties: at low levels, it participates in the angiogenic switch by upregulating angiogenic factors and proteinases; at high levels, it can inhibit EC growth, stimulate the differentiation and recruitment of smooth muscle cells, and promote reorganization of the basement membrane [52]. Moreover, as effective angiogenic factors, chemokines can induce the migration and proliferation of ECs and have pro- or antiangiogenic activities [104]. HIF cooperates with TNF inhibitors to initiate angiogenesis under hypoxic conditions [48-51]; it activates signaling pathways and upregulates the expression of VEGF. Growth factors generated by this pathway activate the mitogen-activated protein kinase and protein kinase B signaling pathways, leading to increased levels of HIF-1 protein and thereby promoting tumor angiogenesis. Adhesion molecules (e.g., α6β1 and α6β4 integrins) mediate VEGF-induced angiogenesis, regulating the adhesion of ECs to the ECM and thereby promoting the migration and survival of the tumor vasculature. Other integrins (e.g., αvβ3, αvβ5, and α5β1) also mediate angiogenesis [63,64].
Irregular Blood Flow
Compared with normal vessels, newly formed tumor vessels are irregular and inconsistent [87]. It has been reported that tumor vessels are insensitive to angiotensin receptor type 2 (AGTR2) signaling, and that blood flow at the tumor site can be intermittent (flowing only once every 15-20 min) or even reversed [105,106]. Irregular blood flow in the tumor is usually caused by irregular vascular structure. Unlike in normal tissues, angiogenic factors in tumors remain activated even at the late stage of vascular maturation, leading to vascular abnormalities characterized by irregular vascular structure and spatiotemporal heterogeneity [107]. Structurally irregular tumor vessels exhibit a curved vascular shape, defects in the EC lining, and a damaged basement membrane, leading to distortion of the vascular morphology and high permeability of the inter-EC space [31,108-110]. The distortion of blood vessels increases the geometric resistance to blood flow, while the high permeability of blood vessels increases the hematocrit of tumor blood and thus the blood viscosity [111]. In addition, the rapid proliferation of tumor cells in a finite space and the excessive deposition of ECM generate large solid stresses between adjacent cells and matrix components. The continuous accumulation of solid stress compresses tumor blood vessels, reducing their cross-sectional area and the pressure difference along the vessel [112]. Together, the increased vascular resistance and blood viscosity and the compression from accumulated solid stress significantly increase the resistance to blood perfusion, resulting in a low blood perfusion rate and slow blood flow [113]. The effect of blood flow velocity on the transport of nanoparticles through blood vessels has also been investigated.
A computer simulation explored the effect of blood flow velocity on the transport of nanoparticles. The results showed that the pressure at the vessel wall, and the pressure gradient between the vascular wall and the interstitial tissue, increase with increasing fluid velocity in the vascular domain, while the trans-vascular transport efficiency of nanoparticles first increases and then decreases [114]. In addition, driven by the pressure difference along the vessel, blood perfusion has the character of convection-diffusion. Convection-diffusion differs between tumor regions and depends on the local pressure gradient and flow resistance, owing to the heterogeneity of tumor blood vessels [115].
In addition to an irregular structure, the abnormal blood vessels of tumors exhibit spatiotemporal heterogeneity [116,117]: the distribution of tumor vessels differs between parts of the tumor and over the course of its growth. Typically, the vessel distribution at the periphery of the tumor is very rich, while vessel density gradually decreases toward the interior. This uneven distribution complicates the delivery of nanodrugs to the tumor center and seriously hinders their penetration and extravascular transport. Of note, the high heterogeneity of tumor vessels in experimental mice and humans reduces the antitumor effects of some nanomedicines [26,118].
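The perfusion arguments in this section can be summarised compactly with the Hagen-Poiseuille relation for laminar flow through an idealised cylindrical vessel segment (a textbook idealisation offered here for orientation, not a model taken from the cited studies):

```latex
% Hagen--Poiseuille flow through an idealised cylindrical vessel segment.
%   Q        : volumetric flow rate
%   \Delta p : pressure drop along the segment
%   \mu      : blood viscosity
%   L        : segment length (increased by tortuosity)
%   r        : lumen radius (decreased by solid-stress compression)
Q \;=\; \frac{\Delta p}{R_{\mathrm{geom}}},
\qquad
R_{\mathrm{geom}} \;=\; \frac{8\,\mu\,L}{\pi\,r^{4}}
```

Because the resistance scales as the inverse fourth power of the radius, even modest vessel compression by solid stress sharply curtails perfusion, while tortuosity (larger L) and hemoconcentration (larger μ) compound the effect, consistent with the low perfusion rates and slow flow described above.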
Extensive Vascular Permeability
Increased vascular permeability is widely found in tumor vessels with a discontinuous endothelium, such as neovessels and immature vessels, as well as in other pathological tissues with disturbed vascular function. In contrast to normal blood vessels, macromolecular drugs can reach the tumor stroma unhindered through the leaky, large-pored vessel wall [12]. However, excessive vascular leakage causes plasma escape and hemoconcentration, resulting in flow stasis and high IFP, which greatly hinder the extravasation of drugs and their movement into the tumor parenchyma. Furthermore, deposited fibrin clots transiently promote the formation of blood vessels and ECM and prevent the penetration of antitumor therapeutic agents. The vascular mediators affecting tumor vascular permeability are summarized below.
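The interplay of leakiness and elevated IFP described above can be made explicit with the standard Starling and Kedem-Katchalsky relations used throughout the tumor-transport literature (a general textbook formulation, not a model taken from the studies cited here):

```latex
% Starling relation for transvascular fluid flux:
%   L_p : hydraulic conductivity of the vessel wall
%   S   : vessel surface area
%   p_v, p_i     : vascular and interstitial fluid pressures
%   \pi_v, \pi_i : vascular and interstitial osmotic pressures
%   \sigma       : osmotic reflection coefficient
J_v \;=\; L_p\,S\left[(p_v - p_i) - \sigma(\pi_v - \pi_i)\right]

% Kedem--Katchalsky relation for solute (e.g., nanoparticle) flux:
%   P : vascular permeability to the solute
%   C_v, C_i : vascular and interstitial concentrations
%   \bar{C}  : mean intramembrane concentration
J_s \;=\; P\,S\,(C_v - C_i) \;+\; J_v\,(1-\sigma)\,\bar{C}
```

The first term of J_s is the diffusive contribution and the second the convective one; when plasma leakage and poor lymphatic drainage drive p_i toward p_v, the fluid flux J_v collapses and the delivery of large carriers, which rely mainly on convection, is strongly suppressed.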
Bradykinin (BK) is of great importance in elevating the permeability of inflammatory sites and tumor tissues, thereby maintaining tumor growth [79,81]. Overexpression of BK receptors in solid tumors has been observed, resulting in defective vascular architecture with large intracellular gaps [119]. Kinin can activate EC-derived nitric oxide (NO) synthase, leading to increased levels of NO, a well-established and effective endothelium-derived vascular modulator [85,120,121]. NO is of great significance in vascular permeability, cell proliferation and extravasation (EPR effect), blood vessel dilation, and elevation of blood flow [83,84]. For example, NO generated from L-arginine under the action of NO synthase induces tumor vascular permeability. It has been demonstrated that the inhibition of NO generation can decrease vascular permeability, thereby weakening the EPR effect. This further confirms that NO is inextricably linked to vascular permeability in solid tumors [84,85]. Prostaglandins E1 and I2 are commonly involved in inflammation and cancer, exert similar effects to those of NO, and can enhance extravasation and EPR effects [83,86]. In summary, vascular permeability in tumors is often directly or indirectly related to kinins.
In addition, several vascular mediators, such as vascular permeability factor (VPF), which is important in tumor angiogenesis, TNF-α, and others have been shown to elevate the vascular permeability of tumors [31]. EC survival and vascular permeability are closely related to the level of VPF/VEGF, as increasing this level can upregulate the corresponding receptors on ECs [34,35]. TNF-α, a multifunctional proinflammatory cytokine with vascular permeabilizing effects [22], can enhance vascular leakiness by disrupting the EC adherens junction protein vascular endothelial cadherin [36]. TNF-α can also increase tumor sensitivity to nanoparticles by serving as a vascular disrupting agent (VDA). At low levels, TNF-α may promote angiogenesis; at higher concentrations, however, it destroys the tumor vessels and increases the accumulation of drug in tumors [122].
Nanoparticles for Enhancing the Tumor EPR Effect
The EPR effect is an effective way for nanoparticles to passively target tumor cells. As opposed to such passive drug targeting, the use of nanoparticles bearing targeting ligands is termed "active drug targeting". Actively targeted nanomedicines have so far failed to demonstrate benefit at the clinical level. This failure can be attributed to the fact that nanomedicines may face insufficient endothelial vascular gaps and a number of physiological barriers, such as high cellular density within solid malignancies and high IFP. Consequently, actively targeted nanoparticles have difficulty identifying target cells when the EPR effect is inadequate. Therefore, enhancing the EPR effect through the use of nanoparticles, for example by elevating blood pressure or by conjugation with antibodies or EPR enhancers such as NO-generating agents, can provide a better platform for subsequent treatment. Several techniques have been employed to enhance the EPR effect, including the inhibition of angiogenesis, upregulation of tumor blood perfusion, and disruption of the vasculature or enhancement of vessel penetration to modulate the tumor vasculature [15,79,109,123,124]. Moreover, Ojha et al. described several pharmacological strategies for vascular regulation (Figure 1). Combined with nanoparticles, these strategies can enhance the EPR effect and improve treatment.
Antiangiogenesis
VEGF, FGF and their receptors, matrix metalloproteinases (MMPs), tubulin, and integrins are closely related to tumor survival, migration, metastasis, and angiogenesis [49,142,143]. It has been reported that drugs targeting these factors can inhibit tumor angiogenesis, thereby increasing blood perfusion and reducing the IFP [21,98,144]. Antiangiogenic agents, to some extent, can restore the pressure gradient between the vascular wall and tumor interstitium. Subsequently, they decrease the blood flow stasis to allow more nanoparticles to penetrate the blood vessels and reach the interstitial tissue [29,98,145]. Hence, antiangiogenesis improves the delivery of the therapeutic entities via maintaining the integrity of the EPR effect and reducing the IFP. Numerous different types of nanoparticles have been extensively investigated to facilitate the delivery of antiangiogenic agents [62,125,127].
Several studies have shown the potential effectiveness of soluble VEGF receptors in inhibiting pathological tumor angiogenesis. Nanoparticles are able to carry VEGF inhibitors to vascular ECs. These inhibitors block pathological angiogenesis and promote tumor cell apoptosis, thereby inhibiting tumor growth and metastasis. Although nanoparticles are potentially applicable to antiangiogenesis, better delivery carriers that can improve the targeting activity are urgently sought. The arginylglycylaspartic acid (RGD) peptide can specifically bind to the integrin receptor of tumor vascular ECs with high affinity [146,147]. Grafting RGD onto nanoparticles may improve their active targeting ability and increase the drug transfection efficiency under conditions of sufficient EPR. However, Storm et al. stated that the potential of RGD-conjugated tumor targeting should not be overestimated, because RGD receptors are widely distributed on blood vessels, which can reduce tumor selectivity [148,149].
Some RNA interference (RNAi) strategies that require entry into tumor cells to function, such as small interfering RNA (siRNA) and short hairpin RNA (shRNA), are ideal for tumor-specific VEGF inhibition. The strategy of silencing VEGF by RNAi has achieved satisfactory results in some solid tumor models [150][151][152][153][154]. The angiogenic activity of VEGF is mediated by binding with high affinity to two endothelium-specific receptor tyrosine kinases, namely, FLT1 (VEGFR1) and FLK1/KDR (VEGFR2). The use of soluble FLT1 (sFLT1) gene therapy has illustrated that the transduced sFLT1 protein can bind to VEGF with similarly high affinity and inhibit its activity. Kim et al. reported an angiogenic EC-targeted polymeric gene vehicle, polyethylenimine-g-polyethylene glycol (PEG)-RGD, which delivered the sFLT1 gene and siRNA [125,126]. These nanoparticles can effectively transfer therapeutic genes to angiogenic ECs, but not to nonangiogenic cells, and the delivered genes effectively inhibit the proliferation of VEGF-responsive ECs. Kanazawa et al. prepared an amphiphilic cationic triblock copolymer as an siRNA carrier to efficiently deliver VEGF siRNA into tumor tissues; tumor growth was significantly inhibited because of the suppression of VEGF secretion from tumor tissues [155].
Some other vascular mediators are also involved in tumor angiogenesis, and targeting them can also effectively inhibit abnormal tumor angiogenesis. Endostatin, a peptide cleaved from the carboxy terminus of collagen XVIII, suppresses the cell cycle and the expression of antiapoptosis genes in proliferating ECs, thereby suppressing angiogenesis. To assess endostatin gene therapy, Oga et al. prepared polyvinylpyrrolidone-endostatin nanoparticles, which exhibit a strong antiangiogenic effect and effective inhibition of metastatic growth in the brain [62]. Moreover, the combined use of sFLT1 with endostatin could be an effective antiangiogenic approach to the treatment of unresectable hepatocellular carcinoma [156]. Pigment epithelium-derived factor is a glycoprotein that plays a universally acknowledged role in the inhibition of angiogenesis via downregulation of VEGF [65]. A cyclic RGD-PEG-polyethylenimine vector exhibited increased gene transfection efficiency in human umbilical vein ECs via binding to αvβ3, and significantly inhibited tumor growth and angiogenesis [157]. The binding of activated NF-κB to DNA promotes angiogenesis, in addition to its roles in facilitating cell proliferation, regulating apoptosis, and stimulating invasion and metastasis [66]. Xiao et al. inhibited the growth and metastasis of breast cancer by delivering p65 shRNA into cells with a bioreducible polymer to block NF-κB signaling [158]. The proangiogenic effects of thyroid hormone on ECs and vascular smooth muscle cells are initiated from the cell-surface receptor for the hormone on the extracellular domain of integrin αvβ3 [67]. Tetraiodothyroacetic acid (tetrac) is a deamination product of L-thyroxine that blocks thyroid hormone binding to this integrin receptor [159].
Therefore, tetrac combined with liposomes and poly(lactide-co-glycolic acid) nanoparticles can target cell-membrane integrin αvβ3 receptors and significantly inhibit angiogenesis [127][128][129][130]. MMPs participate in the process of angiogenesis in tissue reconstruction and neovascular growth through their proteolytic effect, and they release angiogenic factors residing in the matrix. Therefore, MMP inhibitors decrease angiogenesis and the migration of tumor cells, leading to slower progression of transplanted tumors [68]. Indeed, the antitumor efficacy of angiostatin and tissue inhibitors of metalloproteinases (TIMPs) has been demonstrated in various types of solid tumors [160,161]. Dendrimers containing plasmids of angiostatin and TIMP-2 showed high antitumor and antiangiogenic activity [162]. Nevertheless, antiangiogenic drugs also reduce the gaps between tumor vascular ECs. Hence, the size of nanoparticles has to be strictly controlled if antiangiogenic drugs are employed to enhance the EPR effect [114].
Upregulated Tumor Blood Perfusion
The main obstacles to blood perfusion in intravascular transport are irregular vascular structure and accumulated solid stress. Accordingly, the blood perfusion of tumor vessels can be upregulated by vascular normalization and decompression, respectively (Figure 2). Yang et al. concluded that the former can use angiogenesis inhibitors to improve blood perfusion, so as to reduce the transport resistance of nanoparticles [115,163]. The latter can effectively reduce the solid stress through ablation of cells or the ECM, thus increasing the diameter of blood vessels to promote intravascular transport. Reduced blood flow directly limits the perfusion of nanoparticles into the tumor site [164]. In addition, the proliferating cancer cells in the center of the tumor tissue generate excessive pressure and compress the blood and lymphatic vessels, leading to vascular collapse [88,94,95]. This results in an abundance of functional blood and lymphatic vessels in the periphery of the tumor and a scarcity in its center [165]. This uneven distribution of blood vessels further worsens the relatively weak penetration ability of nanoparticles.
Vascular promotion is a vascular regulation strategy that addresses the issue of poor accumulation and distribution of drugs in tumors via increasing the vascular density and upregulating blood perfusion. Induction of angiogenesis appears to promote tumor growth. However, moderate induction of angiogenesis or vascular promotion may also contribute to better enrichment and distribution of anticancer drugs and improve their anticancer efficacy in some tumor models [20]. Among the recently developed strategies, the use of vasodilator-encapsulated nanoparticles for tumor angiectasis has been investigated as a potential option for promoting the extravasation of nanoparticles in tumors. Some vasodilator formulation nanoparticles have been employed, including angiotensin inhibitors, antihypertensive agents, gaseous vascular mediator-generating vasodilators, and ECM degradation agents.
The conversion of angiotensin I to angiotensin II, mediated by carboxypeptidase activity, can be inhibited by angiotensin-converting enzyme inhibitors (ACEIs). AGTR2 is effective in enhancing blood flow and promoting vascular permeability in tumors, owing to its vasoconstrictive function in healthy tissues and its ability to increase systemic blood pressure. It has been shown that the perfusion of tumor vessels gradually shifts from poor to good after slow systemic administration of AGTR2 [87]. An increase in BK levels leads to the activation of endothelial NO synthase. ACEIs (e.g., captopril) inhibit the degradation of BK, thereby increasing its local concentration in tumor tissues. Captopril, an ACEI, also acts by downregulating the expression of AGTR2, thereby dilating blood vessels and lowering blood pressure. A combination of captopril with paclitaxel-loaded nanoparticles has been employed to simultaneously ameliorate tumor perfusion and expand EC gaps, thus enhancing nanodrug delivery to cancer cells [132]. Meanwhile, losartan is an angiotensin II receptor antagonist that increases nanodrug delivery through two mechanisms [166]. Losartan can lower the solid stress that compresses blood vessels, thus improving vessel perfusion and drug delivery. It also increases the intratumoral penetration of intraperitoneally or intravenously injected nanoparticles by decreasing the ECM [167].
In addition to AGTR2, other drugs may also dilate blood vessels. Hydralazine (HDZ), a drug used to treat hypertension and heart failure, has been employed as a tumor vasodilator to modulate the TME. It is thought that HDZ functions by dilating blood vessels. Therefore, Chen et al. prepared HDZ-encapsulated liposomes, which can expand tumor vessels and strengthen tumor permeability. These liposomes also improved the accumulation and permeation of nanoparticles inside the tumor. Compared with free HDZ, intravenous injection of these liposomes in desmoplastic tumor-bearing mice prolonged the blood circulation time of HDZ. Moreover, its vasodilation effect increased the penetration and accumulation of nanoparticles into tumors mediated by the EPR effect to some extent [131]. Of note, in vivo and in vitro studies have shown that HDZ exerts certain antiangiogenic effects [168]. Therefore, such nanomedicines have great potential in upregulating tumor blood perfusion. Sildenafil, a conventional medicine used for pulmonary hypertension therapy, can be employed to develop effective and tumor-selective angiectasis approaches. Sildenafil can be encapsulated into the hydrophobic core of a cisplatin-incorporated polymeric micelle to form a nanoparticle with a hydrophobic center and a dense PEG shell. This polymeric micelle is effective in dilating tumor vessels and boosting the accumulation of cisplatin-sildenafil coloaded nanoparticles in tumors [133].
The endogenous signaling molecule carbon monoxide (CO) and heme oxygenase (HO) play an important role in regulating vascular tension and inducing angiogenesis [69,70]. Fang et al. clearly demonstrated that vascular permeability and blood flow were significantly increased after using CO donors or HO-1 inducers (PEGylated hemin) [72]. They designed two CO generators with tumor selectivity. The first was a nanomicelle of the exogenous CO donor tricarbonyldichlororuthenium (II) dimer, which exhibits slow-release characteristics and selective tumor accumulation mediated by the EPR effect [71]. The second was the HO-1 inducer PEGylated hemin, which can be selectively enriched in tumors after injection and produce CO by inducing HO-1 expression in tumors [72,73]. In solid tumor models, both nanodrugs exhibited higher selectivity for CO production in tumor tissues versus normal tissues, which resulted in augmented tumor blood flow recovery [72].
Platelets preserve tumor vessel integrity and prevent nanomedicines from diffusing into solid tumors. Previous findings have shown that the specific depletion of tumorassociated platelets may be a potent approach for disrupting vascular barriers and enhancing the extravasation of nanoparticles from tumor vessels [169][170][171]. Nie et al. showed that drug delivery can be facilitated via functionalizing nanoparticles, thereby locally depleting tumor-associated platelets that normally restore the leaky vessels [172]. They developed a polymer lipid peptide nanoparticle core consisting of a charged amphiphilic polymer where the positively charged region adsorbs the antibody R300. The platelet-specific R300 antibody can bind to platelets, leading to their micro-aggregation and subsequent removal by macrophages, and further increasing the intratumoral accumulation and retention of drugs.
Excessive constituents of the ECM, such as collagen, fibrin, laminin, elastin, and aggregated platelets in the TME, deposit in the tumor vessels [173]. This hinders the blood supply, impairing the delivery of drugs to the tumor site and reducing efficacy. However, degradation of ECM components by enzymatic treatment (e.g., collagenase) can improve the vascular properties and upregulate blood perfusion at tumor sites [174,175]. Tissue plasminogen activator (t-PA) binds to drug carriers to degrade fibrin. Mei et al. developed t-PA-assembled redox-active nanoparticles (T-PA@iRNP) that degrade fibrin to reduce the pressure on tumor blood vessels, thereby increasing the perfusion of blood and nanomedicines in tumors (Figure 3). When applied to colon cancer models, T-PA@iRNP degradation of deposited fibrin enhances the infiltration of iRNP and immune cells into tumor tissues through an increase in blood flow. This enhances the EPR effect and consequently amplifies the inhibitory effect on tumor growth [134]. Zhang et al. encapsulated DOX and near-infrared-activated losartan in hollow mesoporous Prussian blue nanoparticles to degrade the ECM. The results showed that losartan-containing nanoparticles can enhance the penetration of DOX and exhibit a good tumor inhibition effect under the synergistic action of photothermal therapy/chemotherapy [135].
Enhanced Vessel Penetration
The gaps between tumor vascular ECs are one of the important bases of the EPR effect. However, in some tumors with poor permeability, the vascular endothelial gaps limit the size and rate of passage of larger nanodrugs, so trans-vascular transport cannot be achieved [115]. Therefore, augmenting the permeability of tumor vessels, or even destroying the vascular system, can effectively promote extravasation [176,177]. Nanodrugs with a size <10 nm can effectively permeate inside tumors via trans- and extravascular transport; however, they are rapidly cleared by the kidneys, resulting in an insufficient EPR effect. Nanodrugs with sizes ranging from 50 to 200 nm can achieve long circulation and passively target the tumor site by intravascular transport. However, due to the existence of various barriers, they often have difficulty reaching the core of the tumor. Therefore, nanodrugs of variable size can be used to simultaneously achieve long circulation, good passive targeting, and high permeability [178][179][180][181][182][183].
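The size heuristics above can be sketched as a simple classifier. This is an illustrative toy, not a model from the cited studies; the cutoffs simply restate the figures quoted in the paragraph (<10 nm renal clearance, 50-200 nm long circulation).

```python
def transport_regime(diameter_nm: float) -> str:
    """Classify a nanoparticle by the size heuristics stated in the text.

    <10 nm:    permeates well via trans-/extravascular transport, but is
               rapidly cleared by the kidneys (insufficient EPR effect).
    50-200 nm: long circulation and passive tumor targeting, but limited
               penetration to the tumor core.
    Other sizes fall outside the two regimes discussed in the paragraph.
    """
    if diameter_nm < 10:
        return "renal clearance"
    if 50 <= diameter_nm <= 200:
        return "long circulation"
    return "intermediate/other"

print(transport_regime(5))    # renal clearance
print(transport_regime(100))  # long circulation
```

A variable-size design, as the paragraph notes, aims to move a particle between these regimes (circulate large, penetrate small) rather than accept one fixed trade-off.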
Integrating VDAs into nanomedicines is a promising therapy for ameliorating vascular permeability and the EPR effect. Several VDAs have been evaluated; for example, combretastatin A4 phosphate (CA4P) is a tubulin-binding agent that induces vessel disruption by suppressing tubulin polymerization. Furthermore, the flavonoid acetic acid-based agent 5,6-dimethylxanthenone-4-acetic acid (DMXAA) increases the levels of NO and serotonin, resulting in weak endothelial function. Sengupta et al. introduced poly(lactide-co-glycolic acid) nanoparticles conjugated to DOX, which were trapped in a phospholipid block-copolymer membrane containing CA4P [136]. The nanoparticles were designed to first release CA4P, which initially induces vessel disruption, thereby creating a niche for the release of DOX. This approach was linked to significant tumor inhibition and improvement in overall survival. VDAs and other physiological agents are commonly used to enhance vascular permeability and, thus, promote the extravasation of nanoparticles. Zhang et al. developed a bioinspired nanodesign, which combined vasculature-destructive DMXAA and hypoxia-activated tirapazamine with a mesoporous silica nanoparticle core, as well as a hidden platelet membrane shell [137]. The platelet membrane can be continuously "recruited" by tumors exhibiting artificial blood vessel destruction. The results indicated that the disruption of the tumor vasculature caused by DMXAA and the platelet membrane-mediated targeting of the intratumoral disrupted vasculature were mutually beneficial and reinforcing. Studies have shown that the EPR effect of nanoparticles is induced by rupture of blood vessels, which is closely related to tumor density and the speed of blood flow [176]. As mentioned above, the tumor vasculature consists of only a single layer of ECs with a missing or incomplete basement membrane [184].
Furthermore, the vasculature is closely related to the blood supply of tumor cells [185,186]. Destruction of the vasculature can significantly improve the EPR effect and, if the vasculature is inadequate, the tumor tissue will undergo programmed death [187][188][189].
The development of strategies for interacting with ECs or destroying vascular EC connections is another effective approach to improving vascular permeability. Inspired by this, Palomba et al. transferred the purified leukocyte membrane onto nanoporous silicon particles to produce a type of leukolike vector (LLV) [138]. Multiple receptors on LLV can interact with ECs and reduce the vascular barrier function. The investigators also demonstrated that the leukocyte plasma membrane on the surface of LLV can effectively interact with the overexpressed intercellular adhesion molecule-1 (ICAM-1) in the tumor vasculature, activate the endothelial receptor ICAM-1 pathway, and boost vascular permeability through the phosphorylation of vascular endothelial cadherin. Li et al. found that phase-induced size expansion of radiofrequency-assisted gadofullerene nanocrystals (GFNCs) can destroy the abnormal tumor vasculature. Biocompatible GFNCs of nanoparticle size were designed to penetrate the leaky tumor blood vessels. With the assistance of radiofrequency, a phase transition occurs after the GFNCs spill over from the tumor vessels. In addition, the abrupt and drastic changes in nanoparticle structure caused by this phase transition directly disrupt the abnormal tumor blood vessels (Figure 4). Treatment with this method can cause rapid ischemia, necrosis, and atrophy of tumor tissues, while significantly reducing the toxic and side effects of other antivascular treatments [139,190].
Figure 4. Schematic diagram of the mechanism of tumor vascular rupture after radiofrequency-assisted gadofullerene nanocrystal (GFNC) treatment. GFNC particles injected intravenously into tumor-bearing mice penetrate through the vulnerable tumor vascular ECs. When radiofrequency irradiation is applied, the sudden volume expansion of GFNCs destroys vascular endothelial cadherin at the endothelial adherens junctions of tumor vessels, thereby increasing vascular permeability and destroying the tumor vessels. Reproduced from Li and Zhen et al. [139,190].
In addition to chemotherapy, some physical therapies can also significantly enhance blood vessel penetration and improve the effects of antitumor treatment. Ionizing irradiation can increase vascular leakiness by inducing EC apoptosis and enhancing the expression of VEGF and FGF [191]. Liang et al. designed a radioisotope therapy by encapsulating iodine-131 (131I)-labeled bovine serum albumin (BSA) in liposomes. 131I-BSA-liposomes were intravenously injected into 4T1 tumor-bearing mice. Compared with untreated mice, those treated with 131I-BSA-liposomes showed high retention in the tumor site, demonstrating enhanced tumor vascular permeability and an improved EPR effect [140]. Koukourakis et al. underlined the value of combining radiotherapy with drug delivery systems based on nanomedicines [192]. Patients were treated with radiolabeled PEGylated liposomal DOX and achieved an overall remission rate >70%. Hyperthermia is another effective anticancer treatment modality; inducing hyperthermia in tumors generally leads to an increase in blood flow and vascular permeability, thus promoting drug and oxygen supply to tumors [193]. Hyperthermia can be applied to increase the EPR effect, particularly in nonleaky tumors with low baseline levels of nanomedicine accumulation [194]. Temperature-sensitive liposomes have developed into an ideal nanocarrier for coadministration with hyperthermia, enabling triggered drug release locally at the heated tumor site. Several studies have demonstrated that drug delivery and intratumoral distribution can be ameliorated by combining temperature-sensitive liposomes with modest hyperthermia. It was found that a human ovarian carcinoma tumor model was rather impermeable to liposomes with a size of 100 nm at room temperature. However, as the temperature increased, the release of the liposomes was significantly elevated [195]. Manzoor et al. established temperature-sensitive liposomes containing DOX, which can enhance blood vessel penetration and liposome accumulation [141].
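The switch-like behavior described here (little release at room temperature, strong release on heating) can be illustrated with a toy logistic model. This is an assumption for illustration only, not a model from the cited studies: the transition temperature `t_m` (here 41 °C, a typical value for thermosensitive liposomes) and the `steepness` parameter are both hypothetical.

```python
import math

def release_fraction(temp_c: float, t_m: float = 41.0, steepness: float = 2.0) -> float:
    """Toy model: fraction of drug released (0..1) as a logistic switch
    around an assumed membrane transition temperature t_m (degrees C)."""
    return 1.0 / (1.0 + math.exp(-steepness * (temp_c - t_m)))

# Nearly no release at room temperature, high release under mild hyperthermia:
print(round(release_fraction(25.0), 3))  # ~0.0
print(round(release_fraction(43.0), 3))  # ~0.982
```

The steep sigmoid captures why coadministration with modest hyperthermia confines release to the heated tumor site: a few degrees above `t_m` moves the liposome from essentially closed to essentially fully releasing.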
Conclusions and Future Perspectives
The EPR effect, which involves the pathophysiological mediators and unique anatomical architecture of tumor tissues, is becoming a promising avenue for targeted antitumor therapy. Thus, the tumor-selective delivery of anticancer nanomedicines based on the EPR effect is becoming possible. However, the EPR effect can be highly heterogeneous. Specifically, in the complex tumor environment, it is difficult for nanoparticles to diffuse into vascular areas of the tumor due to high IFP, abnormal ECM, and massive interaction sites in the tumor. Hence, in the last couple of years, on the basis of the EPR effect, scientists have investigated other mechanisms of nanoparticle entry into solid tumors [196]. Recently, Sindhwani et al. proposed that most of the tumor vasculature is continuous and does not have sufficient EC gaps to explain the accumulation of nanoparticles in tumors. Moreover, they stated that most nanoparticles reach the interior of the tumor via active trans-endothelial pathways rather than passive transport through gaps [197]. Although they found that trans-endothelial pathways play a significant role in the accumulation of nanoparticles in tumor sites, their experimental method had certain limitations. Firstly, they only utilized PEGylated gold nanoparticles as model nanoparticles to examine accumulation in the tumor, which cannot represent the accumulation of other nanoparticles in tumors. Secondly, they used a Zombie mouse model to distinguish the contribution of passive transport through gaps from active trans-endothelial transport. This model deactivates active mechanisms while retaining passive transport: the mouse is fixed by transcardiac perfusion, and a peristaltic pump maintains a physiologically relevant flow rate. However, this cannot simulate the blood vessels and blood flow under normal physiological conditions. Lastly, the blood driven by the peristaltic pump only circulated for a short period of time (15 and 60 min).
In summary, trans-endothelial pathways may be a reason for the accumulation of nanoparticles in tumor sites; nevertheless, the EPR effect remains the basis of nanodrug delivery to tumors. Furthermore, nanoparticles which can improve tumor vessel penetration, reduce IFP, and degrade the ECM can be applied to enhance the EPR effect [26].
Herein, we summarized the mechanisms of abnormal vascular function, such as tumor angiogenesis, irregular blood flow, and extensive vascular permeability, as well as their influence on the EPR effect. In addition, we analyzed nanoparticles developed to facilitate the EPR effect in tumors in response to the above factors. In terms of antiangiogenesis, gene therapy nanomedicines targeting angiogenic growth factors and their receptors are the most widely studied and offer another approach to directly inhibiting tumor angiogenesis early in the process. Their diverse nanocarrier formats provide a rich selection for delivery to different types of tumors. For irregular blood flow caused by abnormal vascular morphology and structure, blood perfusion can be effectively upregulated by slight vascular facilitation, vasodilation, or removal of excessive ECM in the TME. Nanoparticles encapsulating different types of drugs exhibit the diversity and versatility of nanocarriers, providing more options for the selection of nanodrugs. However, EPR effect-based drug delivery strategies continue to be characterized by numerous problems and limitations. For example, enhancing the EPR effect may help maintain nutrient and oxygen transport, thereby accelerating tumor growth. Therefore, when designing such nanoparticles, it is particularly important to properly balance tumor killing or inhibition against the tumor growth promotion caused by the EPR effect.

Funding: This work of L.H. was funded by the Carolina Center for Cancer Nanotechnology Excellence, United States (NIH grant CA198999). Y.C. is supported by the Natural Science Foundation of Shanghai (19ZR1472500).
Conflicts of Interest:
The authors declare no conflict of interest.
Neonatal Whisker Trimming Impairs Fear/Anxiety-Related Emotional Systems of the Amygdala and Social Behaviors in Adult Mice
Abnormalities in tactile perception, such as sensory defensiveness, are common features in autism spectrum disorder (ASD). While not a diagnostic criterion for ASD, deficits in tactile perception contribute to the observed lack of social communication skills. However, the influence of tactile perception deficits on the development of social behaviors remains uncertain, as do the effects on neuronal circuits related to the emotional regulation of social interactions. In neonatal rodents, whiskers are the most important tactile apparatus, so bilateral whisker trimming is used as a model of early tactile deprivation. To address the influence of tactile deprivation on adult behavior, we performed bilateral whisker trimming in mice for 10 days after birth (BWT10 mice) and examined social behaviors, tactile discrimination, and c-Fos expression, a marker of neural activation, in adults after full whisker regrowth. Adult BWT10 mice exhibited significantly shorter crossable distances in the gap-crossing test than age-matched controls, indicating persistent deficits in whisker-dependent tactile perception. In contrast to controls, BWT10 mice exhibited no preference for the social compartment containing a conspecific in the three-chamber test. Furthermore, the development of amygdala circuitry was severely affected in BWT10 mice. Based on the c-Fos expression pattern, hyperactivity was found in BWT10 amygdala circuits for processing fear/anxiety-related responses to height stress but not in circuits for processing reward stimuli during whisker-dependent cued learning. These results demonstrate that neonatal whisker trimming and concomitant whisker-dependent tactile discrimination impairment severely disturbs the development of amygdala-dependent emotional regulation.
Introduction
Sensory defensiveness is a negative reaction to one or more types of sensation and is often associated with neurodevelopmental disorders such as autism spectrum disorder (ASD) and fragile X syndrome [1]. Tactile hyporesponsiveness is strongly associated with social and communication impairments in ASD patients [2]. Emotional memories associated with tactile perception are also important for attachment in infancy, defined in rodent studies as seeking proximity to and maintaining contact with the dam when pups are upset or threatened [3,4]. This attachment is an early primitive social behavior; therefore, early tactile sensory defensiveness is likely to influence the development of neural circuits related to emotional and social behaviors, but this remains to be determined.
Whiskers are one of the most highly developed tactile organs in mice and serve as an important communication tool during neonatal development [5,6]. Whisker trimming is frequently observed in laboratory mice in association with social hierarchy, as socially dominant mice trim the whiskers of subordinates [5,7]. In neonatal rats, tactile information from the whiskers (together with olfactory cues) is necessary to receive milk from the dam [8] and to communicate with siblings [9]. Tactile perception from whiskers, particularly during the neonatal period, might thus be critical for the development of social behaviors in rodents.
Whisker-specific tactile perception modules in the somatosensory cortex of rodents are organized as "barrel cortex" [10]. The barrel cortices receive input from thalamic afferents that specify the typical barrel pattern in cortical cyto-architecture and function within the first few postnatal days [11][12][13][14]. The critical period is postnatal days 10-14 (P10-14) in rodents for the functional maturation of neurons in the somatosensory area related to whiskers (the barrel field, S1BF) [15,16]. Long-lasting changes in whisker receptive fields are produced by tactile deprivation during the early postnatal period [17][18][19][20]. Specifically, whisker trimming in neonates leads to hyper-responsiveness of cortical barrel neurons due to wide-ranging and permanent abnormalities in the local inhibitory circuitry, even after whiskers fully regrow [17]. Whisker trimming during the neonatal period also impairs whisker-dependent discrimination and associated behaviors in adulthood [21][22][23]. Rats subjected to whisker trimming during P0-P3 showed shorter crossable distances in the gap-crossing test, higher exploratory activity, and increased social interaction times, even after full whisker regrowth by the time of testing [22]. In addition, whisker trimming of rats during P9-P20 decreased emotional reactivity, such as freezing duration and flight reaction, at P25 [24]. These findings suggest that early postnatal tactile experience is critical for the anatomical and functional maturation of the related somatosensory system [25,26] and might also affect the formation of emotional systems related to social behavior. However, the variations in whisker trimming duration, the developmental stage at which whisker trimming was performed, and the time point of behavioral assessment across these studies [22,24] make it difficult to establish a precise association between early tactile deficits and sustained changes in social behavior. 
In the present study, mouse whiskers were bilaterally trimmed for 10 days after birth (BWT10 mice) to disturb the critical period of S1BF maturation (including thalamocortical afferents, barrel formation, and cortico-cortical projections [14,16]). Adult BWT10 mice were tested for learning of whisker-dependent discrimination tasks and social behaviors. Following stress and behavior testing, mice were also examined for c-Fos expression in the amygdala as a marker of neural activity because amygdala circuits are critical for processing emotional information.
Materials and Methods
Animals
Pregnant ddY mice (embryonic day 17-18; E17-E18) were purchased from Japan SLC (Shizuoka, Japan) and monitored at 24-h intervals to establish the time of birth. The day of birth was defined as postnatal day (P) 0.
Sensory deprivation
In the trimming groups (BWT3 and BWT10), pups were restrained by hand and all whiskers were trimmed daily to within 1 mm of the skin for 3 or 10 days, respectively, beginning at P1. In the control group, pups were restrained by hand but whiskers were left untrimmed. The pups were allowed to mature without further intervention except for weekly cage cleaning. Female pups were sacrificed by pentobarbital overdose (100 mg/kg) at 3 weeks (P3W), and only age-matched P8W-P9W males were used in experiments. The whiskers of both BWT3 and BWT10 mice regrew and were of equivalent length to those of controls by P8W-P9W.
Behavioral tests
Gap-crossing test. The apparatus consisted of two custom-built black Plexiglas platforms (6 cm wide × 15 cm long × 20 cm high) connected by two identical 2-cm diameter pipes that together formed a runway with a manually adjustable gap distance. Two 5 × 5 cm walls were attached to both sides of the platform upper surfaces at the facing ends so that the mice could easily recognize the gap distance. Gap-crossing procedures were conducted as follows. The mice were food deprived for 24 h before testing. To prevent the use of visual information, the tests were conducted in a darkened room. The experimenter kept a red-filtered flashlight on hand, but it was not shined on the mice, so ambient light exposure was less than 1 lux. The mice were initially trained to find a reward (a small food pellet) at the opposite end of the connected runway. After this training phase, a gap was inserted in the runway and widened in 0.5-cm increments. If a mouse crossed the gap to obtain the food reward within 2 min, the trial was considered successful. The gap was widened until the mouse would no longer step across it within the 2-min period. This protocol was repeated twice in succession to obtain an average maximum gap width that the individual mouse would cross.
Three-chamber social interaction test. The testing apparatus consisted of a 30 cm wide × 60 cm long × 20 cm high Plexiglas box divided into three chambers as described previously [27,28]. The mice could move between the chambers through a small opening (6 × 6 cm) in each chamber divider. Plexiglas restraining cylinders were placed in each of the two side chambers, one of which contained a probe mouse. Numerous holes in the cylinders enabled contact between the test and probe mice. Mice to be tested were placed in the center chamber and allowed 5 min to explore the entire box, after which an unfamiliar, same-sex probe was placed in one of the two restraining cylinders. Test mouse movements were recorded using a video camera positioned above the Plexiglas box. The time spent in the social and opposite (empty) chamber was measured.
Social dominance tube test. Social dominance between control and BWT10 mice was measured by the tube test as previously described [29]. Briefly, the apparatus was a transparent Plexiglas tube 30 cm in length with a 3-cm inner diameter. The tube could be separated into sections by two removable gates placed 13 cm from each end. The diameter was sufficient to permit only one mouse to walk through without reversing direction. Prior to the test trial, each mouse was released at either end of the tube without the gates down for 2 min. After this 2-min habituation period, two unfamiliar mice of approximately the same age (P8W-P9W) and matched as closely as possible for body size and weight (37-40 g), one control and one BWT10, were simultaneously released at the opposite ends of the tube. The dominant mouse was considered the one that advanced across the midline or pushed the other mouse out of the opposite end within 2 min. Each mouse was matched with two different opponents with at least a 5-min interval between trials.
Eight-arm radial maze. The custom-built apparatus consisted of a center platform with eight radiating arms, each 5 cm wide × 25 cm long × 15 cm high, numbered 1 to 8. The arm floors were made of black Plexiglas and surrounded by clear 6-mm thick Plexiglas walls. First, a food deprivation schedule was administered to reduce body weight to 85% of baseline. For this purpose, feeding was restricted to 2 h per day for 2 consecutive days. One day before the actual training began, groups of four mice were habituated to the apparatus by placing them at the center and allowing free exploration for 5 min both with and without a bait placed at the arm ends. In the baited condition, the mice were allowed to retrieve the bait, a single 3-g food pellet placed in a food cup. Following habituation and shaping, each animal was individually placed in the center of the maze and trained once a day for 12 consecutive days. The inner wall surfaces of four arms (numbers 1, 3, 5, and 7) were covered with a wire net (1.4 cm mesh; 21 cm long × 7 cm high), and the food cups in these four arms were baited with a single 10-mg food pellet per cup for each daily training trial, while an empty food cup was placed at the ends of the other four arms without wire nets (numbers 2, 4, 6 and 8). Each mouse was allowed to freely explore until it had taken all the pellets or 5 min had elapsed. Measures were made of the ratio of entries into the net-covered/baited arms to total arm entries (ratio of net arm choice) and number of arm revisits. At 2 h after the task, the brains of the mice were processed for immunocytochemistry.
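The two performance measures can be computed from the ordered sequence of arm entries recorded in a trial. The following is a minimal sketch of that scoring, with the arm set and function names chosen for illustration (not taken from the study):

```python
# Arms 1, 3, 5, and 7 are the net-covered/baited arms in this protocol.
NET_ARMS = {1, 3, 5, 7}

def score_trial(entries):
    """Score one radial-maze trial.

    entries: ordered list of arm numbers (1-8) the mouse entered.
    Returns (ratio of net-arm choices to total entries, number of arm revisits).
    """
    net_ratio = sum(arm in NET_ARMS for arm in entries) / len(entries)
    seen = set()
    revisits = 0
    for arm in entries:
        if arm in seen:          # re-entering an already visited arm
            revisits += 1
        seen.add(arm)
    return net_ratio, revisits

# Example trial: 5 of 6 entries into net arms, with one revisit of arm 3.
ratio, revisits = score_trial([1, 3, 2, 5, 3, 7])
```

A per-day learning curve like the one in Fig 4 would then simply average these per-trial scores within each group.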
Stress load procedure
The behavioral stress protocol was based on a previous report [30]. Briefly, the mice were individually placed on an elevated circular platform (6 cm diameter × 25 cm high) for 30 min; the mice showed freezing, defecation, and urination under this stress condition. At 2 h after stress, the mice were killed and tissues were prepared for histochemistry.
Tissue preparation
Brains were processed as described previously [28]. Briefly, brains were removed after intracardial perfusion with 4% paraformaldehyde in phosphate-buffered saline (PBS) and post-fixed in the same fixative overnight. The brain tissues were immersed in PBS containing 20% (w/v) sucrose for cryoprotection and then frozen in an embedding compound (Sakura Finetechnical, Tokyo, Japan). Coronal serial sections of 30-μm thickness were prepared on a cryostat (Leica, Germany, model CM 1800), stained as described below, and mounted on gelatin-coated slides (Matsunami, Osaka, Japan).
Immunostaining and Nissl staining
Immunohistochemical analyses were performed using previously described procedures with a slight modification [31]. In brief, free-floating serial coronal sections (30 μm) were collected in PBS. For c-Fos immunohistochemistry, every third section was processed as follows. Sections were incubated overnight in 0.1 M Tris-HCl (pH 7.4) containing 0.3% H2O2 and 0.3% Triton X-100, washed three times with Tris-buffered saline (TBS), and blocked for 30 min with 2% (w/v) Block Ace (Dainippon Sumitomo Pharma Co. Ltd, Osaka, Japan) dissolved in TBS. Blocked sections were incubated overnight at 4°C with anti-c-Fos (1:3000; Santa Cruz Biotechnology, Inc., Santa Cruz, CA). After three washes in TBS, the sections were incubated for 3 h with biotinylated goat anti-rabbit IgG (1:1000; Vector Laboratories, Burlingame, CA), washed again in TBS, and reacted with ABC solution (Elite ABC, Vector Laboratories) at 4°C overnight. After three washes in TBS, the sections were incubated in 0.1 M acetate buffer (pH 6.0) containing 0.05% 3,3′-diaminobenzidine tetrahydrochloride solution (Dojindo Laboratories, Kumamoto, Japan), 2.5% ammonium nickel sulfate (Nacalai Tesque, Kyoto, Japan), 0.2% β-D-glucose (MP Biomedicals, Santa Ana, CA), 0.04% ammonium chloride (Wako Pure Chemical Industries, Ltd., Osaka, Japan), and 0.0005% glucose oxidase (Toyobo, Osaka, Japan). After washing, the sections were mounted on gelatin-coated slides, cleared with xylene, and coverslipped using Eukit (O. Kindler, Freiburg, Germany). The adjacent series of sections was used for Nissl staining with 0.1% thionin, dehydrated in an ascending ethanol series, cleared in xylene, and coverslipped with Eukit. For estimating the number of c-Fos-positive cells, stained cells were counted in each brain region and cortical layer specified using the criteria described below.
Quantitative studies on c-Fos-positive cells
Three to five sections from each stained series were chosen for the quantification of c-Fos-positive neuron number in the prefrontal areas and amygdala, and five sections were chosen for the quantification of stained c-Fos-positive neurons in S1BF. The sections of choice were those closest to dorsal-ventral level interaural 6.02 mm (for prefrontal area), 2.22 mm (for amygdala), and 2.86 mm (for S1BF) according to the atlas of Paxinos and Franklin [32]. Only the left hemisphere from these sections was quantified. For each brain section, the number of c-Fos-immunopositive cells in a given brain structure was counted, divided by the area occupied by that structure (in mm²), and expressed as positive-cell density. For cortical areas, the entire depth of the cortical field was included in a particular section. The borders of the cortical areas and subcortical nuclei were determined using adjacent Nissl-stained sections. These borders were drawn by an investigator blind to the experimental group assignment of the animals and reviewed by a second investigator. The area was measured by ImageJ (National Institutes of Health; NIH).
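The density measure described above reduces to counts normalized by structure area, averaged over the sampled sections. A hedged sketch of that arithmetic (function and variable names are ours, not the authors'):

```python
def cfos_density(counts, areas_mm2):
    """Mean c-Fos-positive cell density (cells/mm^2) across sections.

    counts: c-Fos-positive cell counts for one structure, one per section.
    areas_mm2: area occupied by that structure in each section (mm^2).
    """
    # Normalize each section's count by its measured area, then average.
    per_section = [count / area for count, area in zip(counts, areas_mm2)]
    return sum(per_section) / len(per_section)

# Example: one structure sampled in three sections.
density = cfos_density([40, 55, 50], [0.20, 0.25, 0.25])
```

In practice the counts would come from blinded manual counting and the areas from ImageJ region measurements, as the Methods describe.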
Statistical analyses
Data are presented as the mean ± standard error of the mean (SE). Group means from behavioral data were compared by Student's t-test. Paired histological data were also compared by Student's t-test. Multiple group means were compared by one-way analysis of variance (ANOVA) with Tukey's post hoc analysis or two-way ANOVA with Bonferroni's post hoc analysis. The social interaction test (Figs 1B, S1 and S2A) was analyzed by Wilcoxon's test. Social dominance in the tube test (Figs 1C and S2B) was analyzed using the χ2 test.
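As a concrete example of the comparisons listed, the two-sample Student's t statistic with pooled variance (the equal-variance form used for group-mean comparisons) can be computed as follows. This is an illustrative sketch, not the authors' analysis code:

```python
import math

def students_t(x, y):
    """Two-sample Student's t statistic assuming equal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Unbiased sample variances of each group
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled variance with nx + ny - 2 degrees of freedom
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

t = students_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The p-value would then be obtained from the t distribution with nx + ny - 2 degrees of freedom (e.g., via a statistics library).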
Neonatal whisker trimming resulted in adult social behavior deficits
To examine the effect of neonatal whisker trimming (P1-P10) on adult whisker function, we compared the control mice to the BWT10 mice in the gap-crossing test, which requires the mice to evaluate a gap and decide whether to cross based on tactile information from the whiskers. The gap-crossing test was performed in the dark so that the mice relied only on whisker-dependent tactile information to locate a target platform across a gap. The mean maximum gap distance that the control mice successfully crossed was 7.0 cm, compared to less than 2.0 cm for the adult BWT10 mice (Fig 1A). This result indicates that neonatal whisker trimming impaired the tactile perceptivity of the adult BWT10 mice to detect the target platform over a longer gap distance, even though all whiskers had regrown to the same length as those of the controls by the time of testing.

Fig 1. Neonatal whisker trimming disrupted tactile sensory performance and social interactions of mice in adulthood. A, Adult BWT10 mice showed a significant deficit in gap-crossing performance compared to controls, as evidenced by a shorter mean maximum crossable distance. Values are expressed as the mean ± SE. ***p < 0.001 versus control, Student's t-test; n = 5 for control mice and n = 6 for BWT10 mice. B, In the social interaction test, control mice showed a strong preference for the social side chamber containing a probe mouse, whereas adult BWT10 mice showed no preference for either side chamber.

Maternal-infant separation during early postnatal development is a highly stressful situation for mice that has long-term influences on adult social behaviors [33][34][35][36][37]. Because tactile contact plays important roles in maternal-infant interactions, we examined the effects of whisker trimming at birth on several social behaviors in the adult mice.
In the three-chamber social interaction test, the control mice spent significantly more time in the "social" side chamber containing a conspecific than in the "nonsocial" empty side chamber, whereas the adult BWT10 mice showed no preference for the social chamber (Fig 1B). Next, we investigated social dominance between the adult control and BWT10 mice. In the tube test, the more dominant mouse in a social hierarchy shows greater aggression and forces its opponent out of the tube when both are placed at opposite ends and must then use the other end for escape [29]. The adult BWT10 mice won significantly more head-to-head confrontations (20/30, 66.7%) against the control mice than expected by chance (χ2 = 5.933, p = 0.015) (Fig 1C). Therefore, whisker trimming at birth altered social behavior as well as tactile perception in adulthood, even though whiskers were fully regrown by the time of the tests.
Neonatal whisker trimming altered stress-induced c-Fos expression pattern in the amygdala and frontal cortex
Emotional regulation has important implications for social behavior as well as for social contacts dependent on multimodal perception, including whisker tactile perception [5,38]. To evaluate whether neonatal whisker trimming affects the development of the emotional system, we examined stress-induced neural activation in several brain regions associated with emotional processing: the basolateral amygdala, paraventricular nucleus (PVN), and prefrontal cortex [medial orbital (MO), ventral orbital (VO), and prelimbic cortex (PrL)] following exposure to elevated platform stress. In the control mice, c-Fos-positive cells were significantly increased 2 h after stress in the PVN and prefrontal cortex, and there was a trend toward increased expression in the amygdala (Figs 2 and 3). However, stress-induced c-Fos expression in the amygdala and PVN of the adult BWT10 mice was significantly higher than that of the control mice, whereas expression in the prefrontal cortex of the BWT10 mice was not altered by the stress (Figs 2 and 3). These data suggest that the adult BWT10 mice show aberrant stress-induced neuronal hyperactivity within the emotional system, including the amygdala and PVN.
Neonatal whisker trimming did not affect whisker-cued memory or related reward processing
The amygdala plays important roles not only in emotional regulation but also in reward-contingent behavior. Indeed, reward-driven neuronal activity in the amygdala is known to modulate memory formation during radial maze appetitive training in mice [39][40][41]. Thus, to clarify whether neonatal whisker trimming affects reward-motivated whisker tactile perception and memory, we analyzed daily performances of the adult control and BWT10 mice during the learning of an eight-arm radial maze task under conditions requiring the detection of whisker cues for reward. Four arms of the maze were cued with wire nets and baited at the ends, while the other four arms contained no tactile cues and were never baited. In this apparatus, the mice had to learn and memorize the relationship between tactile cue and reward and the spatial relationship among cued/baited and uncued/unbaited arms. Both groups showed selective entering into the net-covered arms across trials (one trial per day for 12 consecutive days). The ratio of net arm choice also increased over training days and plateaued at the same level by day 10 in both groups (Fig 4), indicating that the BWT10 mice could learn and memorize the association between the tactile cue and reward (baited arms) as efficiently as the control mice. Thus, neonatal whisker trimming did not impair tactile perception-based, reward-driven memory formation.

Fig 1 (legend, continued). B, BWT10 mice showed no such preference between the social and empty chambers. All values are expressed as the mean ± SE. **p < 0.01; n.s., not significant versus social chamber, Wilcoxon's test; n = 23 for control mice and n = 22 for BWT10 mice. C, In most trials, control mice retreated ("lost") when BWT10 and control mice faced each other in the tube test apparatus. Values are expressed as the percentage of wins. *p < 0.05, significantly different from a chance 50:50 outcome, χ2 test; n = 15 for control mice and n = 15 for BWT10 mice.
Next, to address whether neonatal whisker trimming affects neuronal activity in structures engaged by this task, we examined c-Fos expression in S1BF, the prefrontal cortex, and the amygdala 2 h after the 12th training trial. The density of c-Fos-positive cells significantly increased in S1BF layer IV, the amygdala, and the prefrontal cortex of both groups after training. However, the increase and final density in S1BF layer IV were significantly smaller in the BWT10 mice than in the control mice (551.3 ± 66.4 vs. 1175.8 ± 127.2, respectively; Fig 5A), while increases were comparable between the groups in the amygdala (Fig 5B) and prefrontal cortex (S1 Table). Nissl staining revealed that the total number of neurons in the regions examined was comparable in both groups (S2 Table). Taken together with the results from the gap-crossing test, these results indicate that neonatal whisker trimming impairs the development of circuits for processing whisker tactile discrimination in the somatosensory system but not neuronal circuits for reward processing in the amygdala. Both systems were activated during learning of the whisker-cued maze task, but as discussed below, neuronal function in the S1BF would likely be less important for performance here than in the gap-crossing task.
Discussion
In this study, we found that whisker trimming for 10 days after birth caused long-lasting dysfunction of whisker-dependent tactile perception, as revealed by the gap-crossing test (Fig 1A), as well as abnormalities in social-related behaviors such as social interaction and social dominance (Fig 1B). Furthermore, neonatal whisker trimming severely affected the development of amygdala circuitry related to fear/anxiety processing, as shown by altered c-Fos expression patterns following the height stress compared with those in controls (Fig 2). In contrast, whisker trimming did not alter amygdala circuits related to reward processing, as revealed by c-Fos expression patterns that were unchanged compared with those in controls following whisker-dependent cued training in the radial maze task (Fig 5B). These results indicate that the neonatal suppression of tactile perception and experience due to whisker trimming impairs the development of emotional systems, leading to long-lasting changes in social behavior.

Fig 4. Neonatal whisker trimming did not affect learning in a whisker-cued memory task. The net arm choice among the first four entries (A) and the ratio of net arm choice (B) increased with daily trials in both control and BWT10 mice. No significant differences were observed in these parameters between control and BWT10 mice (two-way ANOVA with Bonferroni's test). n = 9 for control mice and n = 8 for BWT10 mice.

Fig 5. Neonatal whisker trimming altered c-Fos expression in S1BF but not in the amygdala of adult mice following learning of a whisker-cued memory task. A, Distribution of c-Fos-expressing cells in the somatosensory cortex 2 h after the net-guided radial maze task. The graphs show the density of c-Fos-positive cells in S1BF. The density of c-Fos-positive cells in S1BF layer IV of BWT10 mice was significantly lower than in control mice. Scale bar, 100 μm. B, There was no difference in c-Fos-positive cell density in the amygdala between control and BWT10 mice.
To what extent did neonatal whisker trimming affect the development of whisker-dependent tactile perception and cognitive systems? In the gap-crossing task, the mean maximum gap distance was only 2.0 cm for the BWT10 mice. For successful gap crossing, mice must decide whether they are able to cross the gap based on whisker information; however, at such short distances, mice can find the target platform by touching it with their nose as well as with their whiskers [42]. Thus, BWT10 mice may have severe difficulties perceiving the gap distance and/or the shape of the target platform using their whiskers. On the other hand, BWT10 mice could learn the radial maze tactile-cued task with their whiskers, whereas normal adult mice failed to learn this maze task when all whiskers were trimmed prior to the first daily trial and once every three trials thereafter (Soumiya et al., unpublished data). Furthermore, S1BF neuronal activity, as estimated by the number of c-Fos-positive cells, was significantly upregulated after the 12 daily trials in BWT10 mice, but to a significantly lesser degree than in the control mice. These results suggest that limited sensory processing is sufficient for BWT10 mice to learn the net-guided radial maze task but not the gap-crossing test. Thus, the most plausible explanation is that BWT10 mice lack higher-order sensory and/or sensory-motor integration for whisker-dependent tactile perception. Indeed, it has been suggested that the integration of sensory and motor information is required for learning the gap-crossing test, because the information for performance is derived from individual whiskers moving and touching objects synchronously or independently [43][44][45].
Whisker-dependent tactile perception is also important for the social behavior of mice. Adult mice that had their whiskers trimmed immediately prior to testing exhibited reduced aggressive social behaviors against strangers or intruders [6,46]. Similarly, adult mice with whiskers plucked prior to the test showed no preference for the social chamber in the three-chamber social interaction test (S1 Fig). In the case of mice subjected to whisker trimming as neonates, preference for the social chamber was maintained only in those subjected to bilateral whisker trimming for just 3 days after birth (BWT3) (S2A Fig), whereas the BWT10 mice showed no preference for the social chamber, although their whiskers had fully regrown by the time of testing. Moreover, the BWT10 mice showed social dominance over controls, while the BWT3 mice did not (Figs 1 and S2B). Similar observations were previously reported in laboratory rats subjected to whisker trimming during P0-P3 [22]. The difference between the BWT10 and BWT3 mice could result from the duration of sensory deprivation and concomitant effects on neural circuit development. Although there may be multiple causes underlying these abnormalities in BWT10 mouse social behavior, we suggest that impaired neonatal tactile experience and social interaction induce stress that may disrupt the development of emotional systems. This hypothesis is strongly supported by the greater neuronal activation in the amygdala/PVN induced by height stress in the BWT10 mice than in the control mice (Fig 2). Furthermore, social isolation from the dam in the early postnatal period causes emotional abnormalities.

Fig 5 (legend, continued). Values are expressed as the mean ± SE. ***p < 0.001, one-way ANOVA with Tukey's post hoc test; n = 5 for control and BWT10 mice.
Amygdala circuits play key roles in processing different types of information related to fear/anxiety and rewarding/aversive outcomes, which in turn modulate sensory perception, memory formation, and social behavior [47]. Each neuronal system within the amygdala comprises distinct neuronal subtypes and can be activated independently. Indeed, some amygdala neurons excited by an aversive cue never respond to a reward cue during associative learning in the rat amygdala [48]. Compared with controls, neurons in the amygdala of the BWT10 mice were hyper-reactive to the height stress but responded normally during reward processing in the radial maze task (Figs 2B and 5B). These data indicate that the fear/anxiety-related circuit in the amygdala is more vulnerable to neonatal whisker trimming than the reward-related circuit.
Tactile defensiveness, defined as extreme sensitivity or an aversive response to touch stimuli that would be benign to most people (e.g., light touch or clothing texture), is a common feature of neurodevelopmental disorders such as ASD and fragile X syndrome [49][50][51]. The tactile perception system is the earliest of the sensory systems to develop. During infancy and early childhood, tactile perception provides important information about the outside world and an opportunity for environmental interactions, particularly with the mother [1]. Therefore, genetic and environmental causes of tactile defensiveness, and not only neonatal whisker trimming, are likely to impair attachment formation, an early primitive social behavior. Further studies are necessary to elucidate the molecular mechanisms underlying the abnormal development of sensory and emotional circuits caused by neonatal whisker trimming; however, we believe that such studies will have important implications for understanding the pathogenesis of neurodevelopmental disorders.
Supporting Information S1 Fig. Acute whisker removal altered the social behavior of adult mice. All whiskers of P8W ddY male mice were plucked using a pair of tweezers under pentobarbital anesthesia (50 mg/kg) one day before the three-chamber social interaction test. The control mice received only pentobarbital anesthesia. Whisker-plucked mice did not show a preference for the social side chamber containing a probe mouse. All values are expressed as the mean ± SE. *** p < 0.001, Wilcoxon's test; n = 6, control mice; n = 6, test mice.
(EPS) S2 Fig. Neonatal whisker trimming for 3 days after birth did not alter social behavior in adulthood. A, Similar to age-matched control mice, adult BWT3 mice showed a strong preference for the social side chamber containing a probe mouse in the three-chamber social interaction test (Fig 1B). All values are expressed as the mean ± SE. *** p < 0.001 versus social chamber, Wilcoxon's test; n = 27 for control mice; n = 27 for BWT3 mice. B, In the tube test, BWT3 mice did not show social dominance against control mice. Values are expressed as the percentage of wins. χ2 = 1.93, p = 0.164, χ2 test; n = 15 for control mice, and n = 15 for BWT3 mice. (EPS) S1 Table. The number of c-Fos-positive cells in the frontal cortex of mice after the net-guided radial maze task.
Chronic Exposure to Fluoride Affects GSH Level and NOX4 Expression in Rat Model of This Element of Neurotoxicity.
Exposure of neural cells to harmful and toxic factors promotes oxidative stress, resulting in disorders of metabolism, cell differentiation, and maturation. This study examined the brains of rats pre- and postnatally exposed to sodium fluoride (NaF, 50 mg/L); the activity of NADPH oxidase 4 (NOX4), catalase (CAT), superoxide dismutase (SOD), glutathione peroxidase (GPx), and glutathione reductase (GR), as well as the concentration of glutathione (GSH) and total antioxidant capacity (TAC), were measured in the cerebellum, prefrontal cortex, hippocampus, and striatum. Additionally, NOX4 expression was determined by qRT-PCR. Rats exposed to fluoride (F-) showed an increase in NOX4 activity in the cerebellum and hippocampus, a decrease in its activity in the prefrontal cortex and striatum, and upregulation of NOX4 expression in the hippocampus with downregulation in the other brain structures. The analysis also showed significant changes in the activity of all antioxidant enzymes and a decrease in TAC in the brain structures. NOX4 induction and decreased antioxidant activity in central nervous system (CNS) cells may be central mechanisms of fluoride neurotoxicity. NOX4 contributes to blood-brain barrier damage, microglial activation, and neuronal loss, leading to impairment of brain function. Fluoride-induced oxidative stress involves increased reactive oxygen species (ROS) production, which in turn increases the expression of genes encoding pro-inflammatory cytokines.
Introduction
The accumulation of fluoride (F-) in the body is particularly harmful to the central nervous system (CNS) of both humans and animals, leading to learning disabilities, memory and cognitive function impairment, and behavioral disorders in both young and adult individuals [1]. However, weaker protective mechanisms and enhanced blood-brain barrier (BBB) permeability make young individuals particularly vulnerable to the damaging effects of F- [2][3][4]. For example, in areas where the fluoride concentration in drinking water significantly exceeds WHO standards, children are reported to have significantly lower intelligence quotient (IQ) scores compared to children living in uncontaminated areas [5][6][7]. Exposure to this element during pregnancy, development, and thereafter can adversely affect the brain functions of offspring [8]. A number of histopathological changes, including demyelinization, a decrease in the number of Purkinje cells, thickening and loss of dendrites, swelling of mitochondria, and dilation of the endoplasmic reticulum in neurons, have been observed in the brains of experimental animals exposed to this element [9]. Alterations in the density of neurons and in the number of undifferentiated neurons have also been observed in the brains of fetuses aborted therapeutically in a geographic region characterized by endemic fluorosis [10]. Moreover, a decrease in the number of neuronal nicotinic acetylcholine receptor (nAChR) binding sites and a selective decrease in the levels of the receptor subunit proteins have been noted in PC12 cells subjected to fluoride toxicity. Because nAChRs play major roles in cognitive function, including learning and memory, as well as exerting a neuroprotective effect, such decreases in the number of these receptors may be an important factor in the dysfunction of the central nervous system caused by F- toxicity [11].
Another important mechanism of fluoride-induced CNS impairment is oxidative stress caused by increased synthesis of reactive oxygen species (ROS), weakening of antioxidant defense mechanisms, and induction of lipid and protein oxidation [1,8,[12][13][14][15][16]. ROS are highly reactive oxygen derivatives, including the superoxide radical (O2−), hydroxyl radical (OH), hydroxyl anion (OH−), diatomic oxygen (O2), and hydrogen peroxide (H2O2) [17]. Free radicals in cells are mainly generated by redox reactions catalyzed by NADPH oxidase (NOX), xanthine oxidase (XO), flavin oxidase, cytochrome P450, and by respiratory chain components in mitochondria [18]. Intracellular defense mechanisms against elevated concentrations of ROS mainly involve the action of antioxidant enzymes, including superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), and glutathione reductase (GR) [17,18]. The CNS is characterized by a high prevalence of oxygen-dependent processes, while simultaneously containing high levels of readily oxidized fatty acids and relatively low activity of antioxidant enzymes [8,19]. This creates conditions wherein exposure of neural cells to harmful factors may easily lead to the initiation of oxidative stress, an imbalance between ROS synthesis and antioxidant enzyme activity [15,20]. Under physiological conditions, ROS act as signaling molecules in the regulation of numerous processes such as gene expression, cell proliferation, cell viability, apoptosis, and immune response to external factors [17]. An increase in ROS synthesis may, therefore, influence the rate of consumption of substrates or cofactors necessary for the proper functioning of the antioxidant enzymes responsible for their removal, thereby inducing oxidative stress and leading to disturbances in cell metabolism and damage to the entire brain [13,17,[21][22][23].
Given the aforementioned evidence of a role for fluoride-induced oxidative stress in CNS impairment, this study aims to further elucidate the underlying mechanisms by analyzing the effect of pre- and postnatally administered low doses of F- on the activity of enzymes responsible for free radical processes in the cerebellum, prefrontal cortex, hippocampus, and striatum of rats.
Animal Procedures
This study was performed on brain tissues from rats exposed pre- and postnatally to sodium fluoride (NaF). Animal procedures were carried out in strict accordance with international standards of animal care, and every effort was made to minimize suffering and the number of animals used. Experiments were approved by the Local Ethical Committee on Animal Testing in Szczecin, Poland (approval No. 32/2015). All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. All animals were given access to food (standard diet) and drinking water ad libitum. The cages were kept in a controlled-temperature environment on a 12-h/12-h light/dark schedule.
Pregnant female Wistar rats were randomly divided into two groups: control and fluoride. Animals from the control group (n = 6) received tap water to drink, while animals from the experimental group (fluoride, n = 6) received drinking water containing NaF at a concentration of 50 mg/L from pregnancy day 0 to postnatal day 90 (PND 90). Pups were separated from their mothers at PND 21 (end of breast-feeding) and were kept under the same conditions as described above until reaching maturity (PND 90) [24]. All animals were sacrificed by decapitation; brain structures were dissected and placed in liquid nitrogen. Samples were stored at −80 °C for later analysis.
Oral administration of NaF was chosen in this experimental design as it reflects human environmental exposure. Rats consume between 30 and 50 mL of water daily, which, at 50 mg/L, gives an intake of 1.5 to 2.5 mg of F− per day. The consumption norms of F− according to the Polish standards SAI (safe and adequate daily intake) and ADI (acceptable daily intake) amount to 3-4 mg/day for an adult (depending on gender). Environmental studies have shown that the symptoms of fluorosis in an adult human weighing 70 kg appear with consumption of more than 10 mg of F− per day [24].
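The intake arithmetic above is easy to verify; a minimal sketch (the 30-50 mL/day consumption range and the 50 mg/L concentration are taken from the text, and the concentration is treated as the F− dose, as in the paper):

```python
# Back-of-envelope check of the daily fluoride intake quoted in the text.
WATER_F_MG_PER_L = 50.0  # concentration in drinking water (from the study)

def daily_intake_mg(water_ml: float, conc_mg_per_l: float) -> float:
    """Daily intake (mg) for a given water consumption (mL) and concentration."""
    return water_ml / 1000.0 * conc_mg_per_l

low = daily_intake_mg(30, WATER_F_MG_PER_L)   # lower bound of consumption
high = daily_intake_mg(50, WATER_F_MG_PER_L)  # upper bound of consumption
```

Evaluating the bounds reproduces the 1.5-2.5 mg/day range quoted in the text, which sits just below the 3-4 mg/day Polish SAI/ADI norms for an adult human.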
Measurement of NOX4 Concentration
Analysis of NOX4 concentration was performed by ELISA using Rat NADPH Oxidase 4 ELISA Kit (EIAab, Wuhan, China). The material was prepared and tested according to the manufacturer's recommendations. Measurement was performed on the ASYS UVM 340 spectrophotometer (Biogenet, Parkingowa, Poland).
Analysis of NOX4 Gene Expression by qRT-PCR
Analysis of NOX4 expression was performed by quantitative reverse transcription polymerase chain reaction (qRT-PCR). Following dissection, brain tissues were immediately placed in RNAlater buffer (Qiagen, Wrocław, Poland) to inhibit RNA degradation. RNA was extracted from tissue samples using an RNeasy Lipid Tissue Mini Kit (Qiagen, Poland), according to the manufacturer's instructions. Then, 1 µg of extracted RNA was prepared for analysis using a First-Strand cDNA synthesis kit and oligo-dT primers (ThermoFisher, Warszawa, Poland). To quantify mRNA levels, qRT-PCR was performed using an ABI 7500Fast and Power SYBR Green PCR Master Mix (ThermoFisher, Poland). The following primer pairs were used: GAPDH forward: ATGACTCTACCCACGGCAAG, reverse: CTGGAAGATGGTGATGGGTT; NOX4 forward: AGTCAAACAGATGGGA, reverse: TGTCCCATATGAGTTGTT.
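The paper reports NOX4 expression as fold change against the GAPDH reference but does not name the quantification formula; a common choice for SYBR Green data is the 2^-ΔΔCt (Livak) method, sketched here with hypothetical Ct values (the numbers below are illustrative, not from the study):

```python
def fold_change_ddct(ct_target_exp: float, ct_ref_exp: float,
                     ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt = Ct(target) - Ct(reference), computed per group;
    ΔΔCt = ΔCt(exposed) - ΔCt(control); fold change = 2^-ΔΔCt.
    """
    d_ct_exp = ct_target_exp - ct_ref_exp
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_exp - d_ct_ctrl)

# Hypothetical Ct values: NOX4 vs. GAPDH in an exposed and a control sample.
# ΔΔCt = (25 - 20) - (24 - 20) = 1, i.e. a two-fold downregulation.
fc = fold_change_ddct(25.0, 20.0, 24.0, 20.0)
```

A fold change below 1 (here 0.5) corresponds to the downregulation reported for the prefrontal cortex, cerebellum, and striatum; values above 1 correspond to the upregulation seen in the hippocampus.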
Measurement of Antioxidative Enzyme Activity and GSH Concentration
The activities of SOD, CAT, GPx, GR, as well as total GSH levels in rat brain structures, were measured using assay kits from Cayman Chemical Company (Biokom, Janki, Poland): Superoxide Dismutase Assay Kit, Catalase Assay Kit, Glutathione Peroxidase Assay Kit, Glutathione Reductase Assay Kit, and Glutathione Assay Kit. All procedures were carried out in accordance with the manufacturer's recommendations. Measurement was performed with the ASYS UVM 340 (Biogenet, Poland) spectrophotometer [25][26][27][28][29].
Measurement of TAC
TAC was measured using the Antioxidant Assay Kit (Cayman Chemical Company, Biokom, Poland) and the ASYS UVM 340 spectrophotometer (Biogenet, Poland). This kit measures the content of both water-soluble and lipid-soluble antioxidants; the result obtained includes antioxidant enzymes as well as vitamins, lipids, glutathione, uric acid, and other antioxidant molecules in the test sample. The procedures were performed in accordance with the manufacturer's recommendations [30].
Measurement of Protein Concentration
Protein concentration was determined spectrophotometrically using the MicroBCA Protein Assay Kit (ThermoFisher, Poland). Sample preparation was performed in accordance with the manufacturer's recommendations. Measurement was performed with an ASYS UVM 340 spectrophotometer (Biogenet, Poland) and the results were read in MicroWIN (Microwin Technology Solutions Limited, Hong Kong). Protein concentration was calculated based on the obtained standard curve.
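The standard-curve step described above is an ordinary least-squares line followed by inverse interpolation. A minimal sketch (the BSA standard concentrations and absorbances below are hypothetical, not the study's data):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical protein standards (µg/mL) and their measured absorbances.
standards = [0, 25, 50, 100, 200]
absorbance = [0.02, 0.10, 0.18, 0.34, 0.66]
slope, intercept = fit_line(standards, absorbance)

def concentration(a_sample: float) -> float:
    """Read an unknown sample's concentration off the standard curve."""
    return (a_sample - intercept) / slope
```

With these illustrative standards the fitted curve is linear (slope 0.0032 AU per µg/mL), so an absorbance of 0.34 maps back to 100 µg/mL.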
Statistical Analysis
The obtained results were analyzed statistically using the Statistica 12.0 package (StatSoft, Dell, Round Rock, TX, USA). For each of the examined parameters, the arithmetic mean ± standard deviation (SD) was calculated. The Shapiro-Wilk (W) test was used to assess the normality of the distribution of individual variables. Most of the data differed from the normal distribution; therefore, non-parametric tests were used for further analysis. The Mann-Whitney U-test was used to assess the differences between the control group and the study group. Differences were deemed statistically significant at p ≤ 0.05.
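The group comparison above can be sketched in a few lines; this is a minimal stand-in for the Statistica routine, using the normal approximation without tie correction (adequate for illustration, though exact tables are preferable at n = 6 per group), with hypothetical activity values:

```python
from math import sqrt
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test.

    U counts, over all pairs, how often x beats y (ties count 0.5).
    p is from the large-sample normal approximation, no tie correction.
    """
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return u, p

# Hypothetical enzyme activities for n = 6 control vs. n = 6 exposed rats.
control = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
exposed = [7.1, 6.8, 7.4, 7.0, 6.9, 7.2]
u, p = mann_whitney_u(control, exposed)
```

For these fully separated samples U reaches its maximum (36) and p falls well below the study's 0.05 threshold.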
Effects of F- Exposure during Pre- and Postnatal Development on NOX4 Protein Concentration and Gene Expression in Rat Brain Structures
NOX enzymes are membrane-bound proteins responsible for significant ROS production in cells. To determine the contribution of NOX isoform 4 to free radical processes in the brain, its protein concentration and gene expression were measured in the cerebellum, prefrontal cortex, hippocampus, and striatum of rats from the control group and of rats subjected to pre- and postnatal F- exposure.
qRT-PCR analysis of NOX4 expression showed no statistically significant changes in NOX4 expression in the studied brain structures (Figure 1B). Non-significant upregulation of the gene expression was observed in the hippocampus, and its downregulation was observed in the prefrontal cortex, cerebellum, and striatum (Figure 1B). Figure 1. The concentration of NADPH oxidase 4 (NOX4) (A) and its expression (fold change) (B) in the studied brain structures (prefrontal cortex, cerebellum, hippocampus, and striatum) in the control (Ctr; n = 6) and F-exposed (F) (n = 6) groups. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001 for the significance of difference between the groups (Mann-Whitney test).
Effects of Fluoride on SOD, CAT, GPx, and GR Activity and GSH Concentration in the Rat Brain
Analysis of SOD activity showed that pre- and postnatal fluoride exposure leads to a decrease in its activity only in the cerebellum (−68.2%; p = 0.005) and prefrontal cortex (Figure 2A). In the hippocampus and striatum, measured enzyme activity was lower, but the differences were not statistically significant.
A significant decrease in CAT activity was observed following fluoride exposure in the cerebellum (−54.2%; p = 0.043) (Figure 2B). No statistically significant differences were observed in the other brain structures studied.
Figure 2. The effect of pre- and postnatal exposure to NaF on superoxide dismutase (SOD) (A) and catalase (CAT) (B) activity in different rat brain structures (prefrontal cortex, cerebellum, hippocampus, and striatum) in the control (n = 6) and F-exposed (F) (n = 6) groups. * p ≤ 0.05 for the significance of difference between the groups (Mann-Whitney test).
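The percent changes quoted throughout these results follow the usual relative-change formula; a minimal sketch (the group means below are hypothetical, chosen only to reproduce the −54.2% CAT change reported for the cerebellum):

```python
def percent_change(exposed_mean: float, control_mean: float) -> float:
    """Percent change of the exposed group relative to control
    (negative values indicate a decrease)."""
    return (exposed_mean - control_mean) / control_mean * 100.0

# Hypothetical group means; not the study's raw data.
delta = percent_change(4.58, 10.0)
```
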
Analysis of GPx activity showed a statistically significant decrease in the cerebellum (−44.8%; p = 0.003; Figure 3A) and a significant increase in the striatum (+102.7%; p = 0.036) compared to control. No significant differences in activity were found in the other brain structures studied.
Fluoride exposure led to a slight but statistically significant decrease in GR activity (−20.6%; p = 0.008) in the rat cerebellum (Figure 3B). In the other examined brain structures, an increase in enzyme activity was observed, but this increase was statistically significant only in the hippocampus (+46.1%; p = 0.031) and striatum (+72.2%; p = 0.034).
Analysis also showed a small but statistically significant decrease in GSH concentration in the cerebellum (−36.2%; p = 0.029); no statistically significant differences were found in other brain structures (Figure 3C). Figure 3. The effect of pre- and postnatal exposure to NaF on glutathione peroxidase (GPx) activity (A), glutathione reductase (GR) activity (B), and glutathione (GSH) concentration (C) in different rat brain structures (prefrontal cortex, cerebellum, hippocampus, and striatum) in the control (n = 6) and F-exposed (F) (n = 6) groups. * p ≤ 0.05, ** p ≤ 0.01 for the significance of difference between the groups (Mann-Whitney test).
Reduction in TAC as a Result of Chronic Exposure to Fluoride during Pre- and Postnatal Development
Analysis of total antioxidant capacity following perinatal exposure to F- showed a statistically significant decrease in TAC in the prefrontal cortex (−43.7%; p = 0.002), hippocampus (−43.6%; p = 0.0001), and striatum (−73.7%; p = 0.043) (Figure 4). Cerebellar TAC also decreased, but the change was not statistically significant.
Figure 4. The total antioxidant capacity (TAC) of the tissue, measured in different rat brain structures (prefrontal cortex, cerebellum, hippocampus, and striatum) in the control (n = 6) and F-exposed (F) (n = 6) groups. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001 for the significance of difference between the groups (Mann-Whitney test).
Discussion
The synthesis of ROS, which, under physiological conditions, act as signaling molecules, is regulated in vivo by pro- and antioxidant enzymes and molecules. Disturbances in this balance lead to oxidative stress. The brain is particularly sensitive to oxidative stress due to its high O2 consumption and relatively low levels of antioxidant enzyme activity. Additionally, the brain contains a large number of polyunsaturated fatty acids (including membrane phospholipids, which can easily undergo oxidation induced by free radicals) and high concentrations of Fe2+, Cu2+, and Zn2+ ions, especially in the substantia nigra and striatum [17,31,32]. Changes in the levels of neuronal membrane phospholipids, due to their oxidation and release from the membrane, result in changes in cell membrane fluidity, stability, and permeability [9,33]. Moreover, oxidized membrane lipids can be transformed into biologically active compounds involved in the development of inflammation (e.g., prostanoids, leukotrienes, and lipoxins synthesized in cyclooxygenase and lipoxygenase pathways) [34]. On the other hand, ROS themselves can also induce inflammatory processes by activating NF-κB-dependent transcription of inflammatory factors [35].
Previous in vitro and in vivo studies have confirmed that fluoride accumulation in the brain leads to an increase in ROS concentration, a decrease in antioxidant enzyme activity, and an increase in lipid peroxidation. Throughout the literature, data clearly indicate that exposure to both low and high F- concentrations promotes the synthesis of ROS and malondialdehyde (MDA, a marker of oxidative stress) in the brain [15,16,35,36]. Most studies also confirm that long-term exposure leads to a decrease in the activity of the antioxidant enzymes responsible for maintaining proper redox status in cells, although this depends on fluoride concentration, exposure period, examined brain structures, and animal age [16,37,38].
F- may indirectly induce inflammation in the brain by promoting oxidative stress, which in turn increases the synthesis of pro-inflammatory molecules. Chronic inflammation in the CNS has been implicated in the loss of neurons in neurodegenerative diseases [39].
4.1. The Effect of Perinatal Exposure to Fluoride on the Expression and Activity of NOX4 in the Rat Brain
ROS are produced in large quantities in the mitochondria during electron transport chain reactions through mitochondrial membrane complexes. Complexes I, II, and III may "leak" electrons, reducing oxygen molecules to the reactive superoxide O2−, which is then converted to H2O2 [40].
Another important source of ROS is enzymatic NOX-catalyzed reactions occurring in various cellular compartments [41,42]. The function of NOX varies depending on its location in cells and its level of activity. In the CNS under physiological conditions, ROS produced by these enzymes are mainly responsible for the regulation of inflammatory processes (including microglia activation), cellular signaling, posttranslational modification of proteins, regulation of gene expression, and processes such as apoptosis and neuroplasticity [43,44]. NOX4, an enzyme responsible for the synthesis of H2O2 from molecular oxygen, is, like all NOX isoforms, a membrane-bound enzyme. It occurs in the endoplasmic reticulum, mitochondria, and perinuclear space, but not in the outer cell membrane [45][46][47][48]. In the CNS, NOX4 is expressed in the cerebral cortex, hippocampus, and cerebellum [49]. So far, it has been confirmed that NOX4 is responsible for a significant generation of ROS in the brain, and an increase in its activity has been implicated in the development of both acute and chronic neurodegenerative diseases [49,50].
In this study, we examined the expression level of NOX4 and observed that it was additionally expressed in the striatum, where its level was comparable to the other tested brain structures. Analysis of NOX4 protein concentration showed that perinatal exposure to F- led to increased protein expression in the hippocampus and cerebellum, which may be connected with the induction of H2O2 synthesis in these structures. A significant decrease in NOX4 concentration was observed in the prefrontal cortex and striatum of exposed rats. Pre- and postnatal exposure to fluoride (50 mg/L) led to the downregulation of NOX4 expression in all brain structures except the hippocampus, where upregulation was observed. The observed changes in NOX4 expression and protein concentration in the cerebellum indicate a probable feedback mechanism inhibiting NOX4 expression. It is likely that fluoride passing through the blood-brain barrier (BBB) promotes enzymatic activity in the examined brain structures, while the increasing concentration of H2O2 reduces gene expression. This is supported by the observed increase in NOX4 protein concentration and decrease in gene expression in the cerebellum of fluoride-exposed individuals. In the prefrontal cortex and striatum of exposed rats, the observed decrease in protein concentration may indicate a faster mechanism of response to the increasing concentration of H2O2, resulting in the inhibition of gene expression.
Since H2O2 produced by NOX4 is uncharged, it can freely penetrate biological membranes and act as a signaling molecule. Conversely, it can directly damage cells by oxidation of nucleic acids and lipids, especially at excessive concentrations [49,51]. As mentioned earlier, the function of NOX4 depends on, among other things, its location in the cell. The isoform present in the nucleus and nucleolus is linked to regulating the expression of genes associated with the response to oxidative stress, as well as participating in oxidative DNA damage and apoptosis initiation through caspase-3 activation, resulting in loss of neurons [52][53][54]. Therefore, an increase in NOX4 activity in rat brain structures exposed to F- likely influences the regulation of free-radical processes and the induction of proinflammatory cytokine synthesis by initiating changes in gene expression [54]. Casas et al. presented a cell-specific system in which induction of NOX4 in the endothelial cells forming the BBB leads to damage and increased flow of proinflammatory and toxic factors into the brain, while the concurrent increase in enzyme activity in neuronal cells leads to autotoxicity [55]. Our analysis showed a significant increase in NOX4 protein expression in the hippocampus and cerebellum, structures responsible for the consolidation of short-term to long-term memory, numerous cognitive functions, spatial orientation, and motor control. As young individuals are more susceptible to NOX4-mediated damage due to weaker protective mechanisms and enhanced blood-brain barrier (BBB) permeability, exposure to F- seems particularly dangerous, as it can lead to permanent damage and disrupted development of these essential brain structures.
The Effect of Perinatal Fluoride Exposure on the Activity of Antioxidant Enzymes and GSH Concentration in the Rat Brain
Though ROS can act as signaling molecules and regulate cellular processes, excessive concentrations cause damage to cellular structures, including DNA, lipids, and proteins, and disrupt tissue function. Living organisms have developed protective mechanisms against the harmful effects of ROS utilizing intracellular enzymatic systems, which involve low-molecular weight molecules such as GSH, vitamin C, and coenzyme Q, and macromolecular antioxidant enzymes (i.e., CAT, SOD, GPx, and GR) [56]. These antioxidant enzymes work collaboratively and are essential for the proper functioning of the cell.
The present analysis focused on the influence of F- exposure during development on the activity of CAT, SOD, GPx, and GR in the rat brain. Our results showed a statistically significant decrease in the activity of all the examined enzymes (CAT, SOD, GPx, and GR) in the cerebellum. Conversely, in the prefrontal cortex, a decrease in activity was observed only for SOD. The other examined structures (hippocampus and striatum) showed significant increases in GR activity (and, in the striatum, in GPx activity) and a slight decrease in SOD activity. Additionally, a decrease in TAC was observed in all brain structures studied, which was statistically significant in all structures except the cerebellum. The inhibition of antioxidant enzyme (SOD, CAT) activity in the brains of mice treated with F- was previously demonstrated by Vani and Reddy [57].
The observed changes in the activity of antioxidant enzymes may indicate an intensification of ROS synthesis in the examined brain structures. Increased activity of GR in the hippocampus and striatum indicates increased activation of mechanisms for the removal of harmful radicals. However, the decrease in SOD activity in all examined structures, and in CAT, GPx, and GR activity in the cerebellum, may indicate the consumption of substrates or cofactors necessary for these enzymes' action due to the increasing synthesis of ROS and subsequent alterations in cell signaling. Changes in enzyme activity may also be related to the formation of insoluble complexes of F- with cations in the active sites of these enzymes, leading to inhibition of their activity [58]. Additionally, fluoride may associate with functional groups of amino acids surrounding the enzyme active site, changing its conformation and inhibiting its action [59,60]. Our findings in the cerebellum indicate that the decrease in GPx and GR activity is most likely due to the decrease in the concentration of GSH, an essential cofactor for the action of these enzymes. It is likely that F- disturbs the activity of enzymes involved in the synthesis of GSH, leading to a decrease in its concentration and, consequently, in the activity of GPx and GR. This study also found that the activity of individual enzymes differed between brain areas and that these changes were not unidirectional within a given area. However, TAC analysis confirmed a universal decrease in antioxidant activity following long-term exposure to F-.
Numerous studies have shown that oxidative stress contributes to the development of neurodegenerative and demyelinating diseases, including Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), and multiple sclerosis (MS) [19]. In agreement with our results, the changes in individual antioxidant enzyme activities observed in both animal models and patients are not unidirectional. In patients with AD, researchers report an increase in SOD activity in the prefrontal cortex as well as increased enzyme activity in the hippocampus and caudate nucleus. However, Murakami et al., using a mouse model of AD, showed that the induction of inflammation, phosphorylation of Tau protein, and production of abnormal amyloid were associated with a decrease in SOD activity. It is widely assumed that increased activity of antioxidant enzymes in individual regions of the brain in AD patients represents a mechanism to compensate for increased oxidative stress [61,62].
As in AD patients, our analysis of the perinatal neurotoxicity of F⁻ in rats showed multidirectional changes in the activity of antioxidants. We therefore deduce that, as in AD, increases in the activities of individual enzymes may reflect an attempt to compensate for enhanced oxidative stress in a given brain structure, while decreases in their activities may be due to a direct inhibitory effect of fluoride on enzyme activity, the unavailability of cofactors necessary for their action, or downregulation through feedback [58,62,63]. Such changes in the activity of antioxidant systems resulting from exposure to F⁻ may disturb CNS homeostasis and contribute to impaired development of young individuals [64].
Conclusions
The obtained results clearly show that exposure to F⁻ led to an imbalance between ROS synthesis and the activity of antioxidant enzymes in the brain. This is indicated by an increase in the concentration of NOX4 (which participates in the synthesis of H2O2) and a decrease in TAC in the studied brain structures. Our analysis confirms previous reports that F⁻ inhibits the activity of antioxidant enzymes in the brain not only directly, by binding to elements of the enzyme's active site, but also indirectly, by promoting the consumption of cofactors necessary for their action. Cofactor consumption results from continuous induction of ROS synthesis, as indicated by reduced GSH levels and increased NOX4 protein concentration. The role of ROS in inducing the expression of genes encoding pro-inflammatory cytokines makes oxidative stress one of the main factors promoting pathological inflammatory states.
We have previously demonstrated that long-term exposure of rats to NaF (50 mg/L in drinking water) affects lipid metabolism in the liver and brain [24,65]. Using the same experimental model as presented in this study, we found that pre- and postnatal exposure to F⁻ causes irreversible changes in the liver. Morphological changes resembling early phases of steatosis, which indicate the first stage of non-alcoholic fatty liver disease, were observed in rat liver [24]. Using the experimental model described in this study, we have also found that F⁻ leads to changes in lipid metabolism in the brain, and that the structure especially vulnerable to its action is the hippocampus. Long-term exposure to this element causes changes in the activity of enzymes implicated in lipid metabolism, which affect the concentrations of the arachidonic acid metabolites prostaglandin E2 and thromboxane B2 in the brain [65]. The mammalian brain, in comparison to other tissues, is highly enriched in lipids that are vulnerable to oxidative stress. Under pathological conditions, when ROS synthesis in the brain is increased, lipid oxidation may trigger local inflammation.
Overall, the oxidative imbalance observed in the studied brain structures, together with previously published findings, shows that exposure to fluoride in the period from prenatal development to full sexual maturity can lead to irreversible and detrimental changes in the rat brain.
Funding:
The project was financed by the program of the Minister of Science and Higher Education under the name "Regional Initiative of Excellence" in 2019-2022, project number 002/RID/2018/19, with funding of 12,000,000 PLN.
Conflicts of Interest:
The authors declare they have no actual or potential competing financial interests.
Gender-specific differences in the awareness and intake of vitamin D among adult population in Qassim region
BACKGROUND: Despite the abundance of sunshine throughout the year, Vitamin D deficiency is highly prevalent among different Saudi populations. The objective of the current study was to evaluate the awareness and intake of Vitamin D and their association among adults of both genders. MATERIALS AND METHODS: A cross-sectional study was done between June and August 2016 among adult patients and their family members (>18 years) presenting at 6 Primary Care Centers in the Qassim region, Saudi Arabia. RESULTS: A total of 500 study participants were included in the study; 54.6% of the participants were males, mostly aged between 26 and 50 years. The majority of the participants had heard of Vitamin D (91.4%), believed in its importance for health (92.8%), were aware of the symptoms of Vitamin D deficiency (72.6%), and were able to identify exposure to sunlight (81.4%) and diet (70.4%) as sources. The sources of Vitamin D used by the participants were exposure to the sun (57.2%), Vitamin D-rich foods (51.2%) and supplements (18.8%). There was a significant association between overall awareness of Vitamin D and intake of at least 2 sources of Vitamin D in males (P < 0.001) but not females (P = 0.920). Although females had better awareness than males, exposure to the sun was much lower in females than males. CONCLUSION: As supplementation was very low in both genders, and since cultural factors that limit females' exposure to the sun are not easily modifiable, the current findings further underline the critical importance of Vitamin D supplementation, particularly in females in Saudi Arabia.
What are the implications for research, policy, or practice?
The current findings further highlight the critical importance of vitamin D supplementation, especially among females in Saudi Arabia.
Background
Vitamin D is a fat-soluble vitamin essential for the regulation of calcium and phosphorus that supports cellular processes, bone mineralization and neuromuscular function. 1,2 It is also important for the functioning of several other body systems, including the immune, cardiovascular, and reproductive systems. 1,3 Vitamin D deficiency is a global public health problem affecting all regions of the world, especially the Middle East. 4,5 In addition to skeletal and dental problems, vitamin D deficiency has been linked to a long list of diseases including some types of cancer, autoimmune diseases, allergic diseases, inflammatory bowel diseases, cardiovascular diseases, hypertension, diabetes, and several others. 4,6 On the other hand, excess intake of vitamin D (hypervitaminosis D) can cause hypercalcemia and calcium deposition in a number of soft tissues in the body. 7 As very few food items are naturally rich in vitamin D and vitamin D fortified foods are often not adequately consumed, sun exposure remains the most important natural source of vitamin D. 4 Milk in the Qassim region is also a good source. Despite the abundant sunlight in Saudi Arabia, it was estimated that approximately 80 per cent of different Saudi populations have vitamin D deficiency (defined as 25-hydroxyvitamin D <50 nmol/l). 8 Additionally, vitamin D deficiency was notably much higher in females than males, probably due to cultural and religious reasons. 9,10 These findings highlight the critical need to raise public awareness of the problem and its prevention. 11,12 In Saudi Arabia, a number of studies have recently examined the awareness of vitamin D and its deficiency among children and adolescents, 13,14 female college students, 15 and hospital patients. Additionally, the awareness of vitamin D supplementation for infants has been examined among primary care physicians and mothers.
16,17 However, none of the previous studies examined the awareness and intake of vitamin D-rich sources among adults in a primary care setting. Moreover, the association between awareness and intake of vitamin D-rich sources has never been the focus of these studies. The objective of the current study was to evaluate the awareness and intake of vitamin D-rich sources, and their associations, among adult males and females attending primary care centres in the Qassim region.
Study design
A cross-sectional study was performed between June 2016 and August 2016 among attendants of 6 primary care centres in the Qassim region, Saudi Arabia. Ethical approval was obtained from the ethical committee of Qassim College of Medicine, Saudi Arabia.
Population
A total of 500 primary care attendants were recruited using convenience sampling while waiting for primary care appointments. Both patients and their family members who agreed to join the study were included. Completion of the questionnaire, after the objectives of the study had been explained, was considered consent to participate. Adult males and females were included irrespective of the reason for attending the primary care centre. Exclusion criteria were age less than 18 years and severe mental or sensory problems that prevent convenient interaction.
Data collection
Data were collected using a self-administered study questionnaire covering the following sections: demographics, medical history, awareness of vitamin D, and intake of vitamin D. The questionnaire was developed after reviewing similar studies 13,18 and was reviewed by a consultant dermatologist. A pilot study was conducted on 10 volunteers to ensure the clarity and convenience of the questions and to estimate the time needed to complete the questionnaire.
Study outcomes
Awareness of vitamin D was defined as the ability to answer positively 4 questions about: ever hearing of vitamin D, awareness of the importance of vitamin D for health, awareness of the symptoms of vitamin D deficiency, and awareness of at least one vitamin D source (diet or sun exposure). Vitamin D intake was defined as actual intake of at least two sources of vitamin D among sun exposure, a vitamin D-rich diet, and vitamin D supplements.
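The two composite outcomes defined above can be sketched as simple predicate functions. The field names here are hypothetical, as the paper does not report the questionnaire coding:

```python
# Sketch of the two binary study outcomes for one respondent.
# All arguments are booleans from the questionnaire (hypothetical names).

def overall_awareness(heard, importance, symptoms, knows_sun, knows_diet):
    """Aware = positive answer to all four awareness questions; the fourth
    question is satisfied by knowing at least one source (diet or sun)."""
    return heard and importance and symptoms and (knows_sun or knows_diet)

def adequate_intake(sun_exposure, rich_diet, supplements):
    """Intake = actual use of at least two of the three vitamin D sources."""
    return (int(sun_exposure) + int(rich_diet) + int(supplements)) >= 2
```

For example, a respondent who knows diet but not sun exposure as a source still counts as aware, while a respondent relying on sun exposure alone does not meet the intake criterion.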
Statistical analysis
Data were presented as frequencies and percentages. Demographic characteristics and medical history were compared between groups defined by the study outcomes: overall awareness of vitamin D and intake of vitamin D. Chi-square or Fisher exact tests, as appropriate, were used to detect significant differences. The associations between overall awareness and intake of vitamin D, overall and stratified by gender, were assessed using the Chi-square and Mantel-Haenszel Chi-square tests. All P-values were two-tailed, and P < 0.05 was considered significant. SPSS software (release 23.0, Armonk, NY: IBM Corp) was used for all statistical analyses.
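As a minimal sketch of the group comparisons described above, a 2x2 Chi-square test can be reproduced in Python with scipy (which, like SPSS, applies the Yates continuity correction to 2x2 tables by default). The counts below are hypothetical, not the study data:

```python
# Hypothetical 2x2 table: rows = aware / not aware, cols = intake / no intake.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30],
                  [40, 10]])
chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
print(round(chi2, 2), dof)  # 15.04 1
```

A significant result (here p < 0.001) would indicate that awareness and intake are associated in this hypothetical sample.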
Results
A total of 500 study participants were included in the current analysis. The demographic characteristics and medical history of the study participants are shown in Table 1, the awareness of vitamin D is shown in Table 2, and the intake of vitamin D-rich sources is shown in Table 3.
Approximately half (51.2 per cent) of the participants reported eating vitamin D rich foods such as milk, oily fish, and eggs. The majority (83.5 per cent) of the participants were drinking one or two cups of milk every day. Only 18.8 per cent of the participants were taking vitamin D supplements and 19.6 per cent were taking multivitamins. More than half (57.2 per cent) of the participants reported exposing face, arms or legs (whenever possible) to sunlight within the last year. This was less than 5 minutes in 43.0 per cent of the participants and between 5 and 15 minutes in 30.4 per cent of the participants. Only 17.2 per cent of the participants were using sunscreen when exposed to the sunlight. As shown in Figure 2, out of the 3 common sources of vitamin D (diet, sun exposure, and supplements) 19.2 per cent of the participants were taking none of them. On the other hand, 80.8 per cent were taking at least one source, 41.2 per cent were taking at least 2 sources, and only 5.2 per cent were taking all the 3 sources.
The associations of patients' characteristics with both awareness of vitamin D and its intake are shown in Table 4. Overall awareness (as shown in Figure 1) was significantly higher among the middle-aged group compared with other age groups (76.9 per cent versus 54.9 per cent, p<0.001), among females compared with males (78.9 per cent versus 54.9 per cent, p<0.001), among graduates compared with non-graduates (71.3 per cent versus 58.3 per cent, p=0.003), and among those with compared with those without a previous history of vitamin D deficiency (83.7 per cent versus 52.9 per cent, p<0.001). The intake of at least 2 sources of vitamin D was significantly higher among males compared with females (85.7 per cent versus 74.9 per cent, p=0.002). As shown in Figure 3, this was driven by the lower sun exposure among females than males (41.9 per cent versus 70.0 per cent, p<0.001). Table 4 also shows marginally significant (p>0.05 but <0.10) trends of higher vitamin D intake during pregnancy and breastfeeding (85.4 per cent versus 72.6 per cent, p=0.087) and in the absence of a previous history of vitamin D deficiency (83.5 per cent versus 77.0 per cent, p=0.070).
As shown in Figure 4, there was a significant association between overall awareness of vitamin D and intake of at least 2 sources of vitamin D across all included participants, with aware participants having higher intake (45.0 per cent versus 33.9 per cent, p=0.012). However, when the same association was examined by gender, it became stronger and more significant in males (57.3 per cent versus 33.3 per cent, p<0.001) but was non-significant in females (34.6 per cent versus 35.4 per cent, p=0.920).
Discussion
The findings of the current study showed that approximately two-thirds of the participants had heard of vitamin D and were aware of its importance, of the symptoms of vitamin D deficiency, and of at least one vitamin D source. Comparing the current findings with the data previously reported in Saudi Arabia is challenging because of the variability in the populations examined and the tools used in this and previous studies. [13][14][15]18 One of these studies used a qualitative approach to studying awareness. 15 [AMJ 2017;10(12):1051-1060] Nevertheless, individual awareness items in the current study scored better than in previous studies in Saudi Arabia. For example, those who had heard of vitamin D were approximately 90 per cent in the current study compared with 70 per cent among adult patients attending different clinics in the Western region 18 and approximately 30-64 per cent among healthy children and adolescents in Riyadh. 13,14 Similarly, those who were aware of sun exposure and/or diet as sources of vitamin D were 88 per cent in the current study compared with 51-76 per cent in previous studies. 13,14,18 The better awareness of vitamin D observed in the current study may be related to the higher educational level (more than half of our participants were graduates) and the more health-oriented primary care population compared with the populations examined in previous studies, which included children and adolescents. 13,14,18 As expected, awareness in the current study was higher among more educated participants. 19 The better awareness observed among those with a history of vitamin D deficiency may be related to greater exposure to information while seeking medical advice. Similar to previous studies, relatives/friends and physicians were the main sources of information about vitamin D. 18 However, the findings also highlight the minor role played by schools and the media in raising public awareness of vitamin D.
Approximately 41.8 per cent of our participants reported a positive history of vitamin D deficiency. Unfortunately, we did not measure the vitamin D level in our participants to confirm the actual prevalence of vitamin D deficiency. As expected, a history of vitamin D deficiency was more frequent in females than males. A recent meta-analysis of 13 studies conducted over the last 10 years among more than 24,000 Saudi adults, children, and pregnant women showed that the prevalence of vitamin D deficiency ranged between 50 per cent and 95 per cent, with an average of 81 per cent. 8 Interestingly, all the studies included in this meta-analysis that reported gender-specific prevalence showed a much higher prevalence of vitamin D deficiency among females than males, even in childhood and adolescence. 8 The current study showed inadequate intake of vitamin D.
The most common source of vitamin D among our participants was sun exposure (57.2 per cent), followed by vitamin D-rich foods (51.2 per cent) and supplements (18.8 per cent). Only 5.2 per cent were taking all 3 sources and 41.2 per cent were taking at least 2 sources. The finding is not surprising given the high vitamin D deficiency and low intake reported before in Saudi Arabia. 9,20 However, some previous studies could not confirm an association between the intake of some vitamin D sources (especially dietary sources) and the presence of vitamin D deficiency. 18,21 As expected, the intake of vitamin D-rich sources in the current study was lower in females than males. This was due to lower sun exposure rather than dietary sources or supplementation. The lower sun exposure among females in Saudi Arabia has been documented before, even among children. 9,10,13 This has been linked to cultural, lifestyle, and religious reasons that limit female outdoor activities and demand wearing complete body cover, usually of black colour, when in public. 9,10,13 Additionally, non-gender-specific factors in Saudi Arabia, such as very hot weather that limits outdoor activities and the generally dark skin colour that limits the penetration of sunlight, contribute to the problem of vitamin D deficiency. 10,22 The current findings showed that awareness is associated with vitamin D intake in males but not females. The non-translation of awareness into action in females may be explained by the same cultural, lifestyle, and religious reasons that limit sun exposure among females compared with males. Since these reasons are difficult to modify, and since supplementation was very low in both genders, the current findings further highlight the critical importance of vitamin D supplementation, especially among Saudi females and other at-risk groups.
23,24 Supporting this recommendation, less than 20 per cent of the participants in this study were receiving vitamin D supplements or multivitamins. Additionally, stricter regulations mandating the fortification of dairy products, cereals and orange juice may also be required in Saudi Arabia to counter the limited dietary intake of vitamin D. 25 To our knowledge, the current study is the first to report gender-specific associations between awareness and intake of vitamin D. This was done among a relatively large sample recruited from 6 primary care centres. Nevertheless, some limitations are acknowledged. For example, since the study design involved self-reported cross-sectional data collection, causation cannot be confirmed. Additionally, the convenience sampling used to recruit our participants may limit the generalization of the findings. However, we believe that the current findings are a good addition to the field of vitamin D research in Saudi Arabia and that the above limitations are minor and common to all previous awareness studies.
Conclusion
In conclusion, we report relatively good awareness but lower intake of vitamin D-rich sources among a group of adult males and females in a primary care setting. Awareness was associated with vitamin D intake in males but not females, mainly due to lower sun exposure in females than males. As supplementation was very low in both genders, and since cultural factors promoting limited sun exposure among females are not easily modifiable, the current findings further highlight the critical importance of vitamin D supplementation, especially among females and other at-risk groups in Saudi Arabia. Additionally, there is a need to promote the role played by schools and the media in raising public awareness of vitamin D.
Spatial Autocorrelation, Source Water and the Distribution of Total and Viable Microbial Abundances within a Crystalline Formation to a Depth of 800 m
Proposed radioactive waste repositories require long residence times within deep geological settings for which we have little knowledge of local or regional subsurface dynamics that could affect the transport of hazardous species over the period of radioactive decay. Given the role of microbial processes on element speciation and transport, knowledge and understanding of local microbial ecology within geological formations being considered as host formations can aid predictions for long term safety. In this relatively unexplored environment, sampling opportunities are few and opportunistic. We combined the data collected for geochemistry and microbial abundances from multiple sampling opportunities from within a proposed host formation and performed multivariate mixing and mass balance (M3) modeling, spatial analysis and generalized linear modeling to address whether recharge can explain how subsurface communities assemble within fracture water obtained from multiple saturated fractures accessed by boreholes drilled into the crystalline formation underlying the Chalk River Laboratories site (Deep River, ON, Canada). We found that three possible source waters, each of meteoric origin, explained 97% of the samples, these are: modern recharge, recharge from the period of the Laurentide ice sheet retreat (ca. ∼12000 years before present) and a putative saline source assigned as Champlain Sea (also ca. 12000 years before present). The distributed microbial abundances and geochemistry provide a conceptual model of two distinct regions within the subsurface associated with bicarbonate – used as a proxy for modern recharge – and manganese; these regions occur at depths relevant to a proposed repository within the formation. 
At the scale of sampling, the associated spatial autocorrelation means that abundances linked with geochemistry were not unambiguously discerned, although fine scale Moran’s eigenvector map (MEM) coefficients were correlated with the abundance data and suggest the action of localized processes possibly associated with the manganese and sulfate content of the fracture water.
INTRODUCTION
A goal of ecology is to relate population densities from within a region of interest to local or regional environmental conditions, however, analyses of spatially distributed sampling locations can be complicated by autocorrelation (Dormann et al., 2007;Gilbert and Bennet, 2010) or a lack of independence between nearby sampling locations. This characteristic, if not recognized, can lead to incorrect conclusions for population and environment interrelationships. When modeling population densities within a region of interest, autocorrelation can be caused by, for example, distance relationships in biological processes such as dispersal, by assuming an incorrect relationship between abundances and environment within a model, or by not accounting for an important environmental determinant that in itself is spatially structured and thus causes spatial structuring in the response (Dormann et al., 2007). Discovery of distance-relationships associated with biological processes provides an important and interesting insight on community patterns while the assumptions made when modeling population abundances can lead to incorrect conclusions by having model residuals that are not randomly distributed, and so are themselves autocorrelated (Dormann et al., 2007).
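Spatial autocorrelation of the kind discussed above is commonly quantified with global Moran's I, the statistic applied later in this study. A minimal numpy sketch, using a hypothetical one-dimensional arrangement of sites with nearest-neighbour weights:

```python
# Minimal sketch of global Moran's I. w is a symmetric spatial weights
# matrix (w[i, j] > 0 if sites i and j are neighbours).
import numpy as np

def morans_i(x, w):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = z @ w @ z                       # sum_ij w_ij * z_i * z_j
    return (len(x) / w.sum()) * num / (z @ z)

# Ten sites on a line; a smooth gradient along the line is strongly
# positively autocorrelated (I near +1), while random values give
# I near its null expectation of -1/(n-1).
n = 10
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(round(morans_i(np.arange(n), w), 3))  # 0.778
```

Positive values therefore flag the lack of independence between nearby sampling locations that, if ignored, can bias inference about population-environment relationships.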
Within the volume of proposed geologic repositories for hosting waste with inventories of long-lived radionuclides, information on the microbial abundances within an undisturbed setting at depth can help formulate conceptual models for longterm subsurface dynamics over the expected inventory decay period. A microbial community is defined as an assemblage of potentially interacting taxa that co-occur over space and time (Nemergut et al., 2013). Differences in abundances over space and time can occur through a combination of processes such as by abiotic selection and biotic competition or by speciation and drift between unconnected communities (Hubbell, 2001;Vellend, 2010). Microbial distributions in natural water systems also tend to be dispersed (Bliss and Fisher, 1953;El-Shaarawi et al., 1981;Haas and Heller, 1986;Hilbe, 2011;Harrison, 2014); occurring as clusters of cells or associated with suspended particles.
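The tendency of clustered cell counts to be over-dispersed (variance exceeding the mean, as under a negative binomial rather than a Poisson model) can be illustrated with simulated draws; the parameters here are arbitrary:

```python
# Compare the variance-to-mean ratio of Poisson (equidispersed) and
# negative binomial (overdispersed) count draws.
import numpy as np

rng = np.random.default_rng(0)
poisson = rng.poisson(lam=5.0, size=10000)
# numpy's parameterisation: mean = n*(1-p)/p, variance = n*(1-p)/p**2,
# so the variance-to-mean ratio is 1/p (here about 3.3).
negbin = rng.negative_binomial(n=2, p=0.3, size=10000)

print(round(poisson.var() / poisson.mean(), 2))  # ~1: equidispersed
print(round(negbin.var() / negbin.mean(), 2))    # >1: overdispersed
```

A variance-to-mean ratio well above 1 in observed counts is the usual motivation for negative binomial rather than Poisson error models when regressing microbial abundances on environmental covariates.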
In this study, distributions of the total and viable count data and the geochemistry data were derived from sampling multiple saturated fractures that were accessed from boreholes drilled into overlapping bedrock assemblages underlying the Chalk River Laboratories (Deep River, ON, Canada) site. Data collection was part of a siting assessment for a potential future geologic waste management facility at the CRL site (Thompson et al., 2011). The locations of these boreholes are shown in Figure 1. Previous studies performed within this formation (Stroes-Gascoyne et al., 2011; Beaton et al., 2016) showed that bacterial taxa were numerically dominant in the fracture water and that these bacteria displayed nitrogen metabolism with episodes of sulfur metabolism. This finding is akin to other crystalline subsurface environments hosting microbial communities that display metabolic activity such as nitrate, iron and sulfate reduction (Kieft, 1990; Jain et al., 1997; Haveman et al., 1999; Sahl et al., 2008; Nyyssönen et al., 2012). Although the bacteria were mainly uncultured, the closest cultivated representatives were from the phenotypically diverse Betaproteobacteria, Deltaproteobacteria, Bacteroidetes, Actinobacteria, Nitrospirae, and Firmicutes. Hundreds of taxa were identified but only a few were found in abundance (>1%) across all 16S rRNA assemblages. A decay of phylogenetic similarity with distance up to 1.5 km was evident within sampling locations separated by up to 5 km of rock (Beaton et al., 2016). We propose that this decay distance is related to dispersal within vertically oriented fractures.
To test for the possible influence of recharge and metabolism on total and viable abundances, we extend our findings for nitrogen metabolism and sulfate reduction (Stroes-Gascoyne et al., 2011) and for the distance decay of similarity (Beaton et al., 2016) by analyzing the relative influences of the fracture water on microbial abundances and viability; an aspect of this subsurface habitat that had not been evaluated previously. Isotopic analysis of the dilute fracture water indicates it is of meteoric origin, with no significant rock-water interactions; Supplementary Figure S1 shows the stable isotope composition for hydrogen and oxygen in the fracture water relative to the Vienna Standard Mean Ocean Water (VSMOW). This recharge provides a possible source of soluble species for microbial processes and is a medium for dispersal. Porewater analysis from rock cores identified nitrogen compounds within the porewater composition that were not detected within the fracture water, so despite the stable isotope compositions relative to VSMOW (Supplementary Figure S1), rock-water interactions relevant to microbial abundances may still be occurring.
To gauge interrelationships between subsurface microbial abundances with the geochemistry we combined the abundance and geochemical data from multiple sampling opportunities and performed modeling to address whether recharge can explain how subsurface communities assemble within these fractures. The chemical species within the fracture water were evaluated for their significance as explanatory variables by a multivariate approach in which the fracture water compositions were compared with the compositions of known and derived compositional end-members. The explanatory power of the end-member compositions provide insight into probable source waters and, therefore, insight into the history of recharge, mixing and other geological processes (Laaksoharju et al., 1999;Laaksoharju et al., 2008) that may have shaped the current fracture water compositions. Moran's I was used to determine spatial autocorrelation between sampling locations and the fracture water components were evaluated by a generalized linear model (GLM) (Venables and Ripley, 2002) for their significance as possible metabolic substrates associated with microbial abundances. Positive Moran's eigenvector map (MEM) coefficients were included as independent variables in the GLM to gauge for spatial autocorrelation within the model residuals.
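The end-member comparison described above can be illustrated as a mixing calculation. The actual M3 method derives mixing proportions from principal component analysis; the sketch below instead solves a direct non-negative least-squares problem, with hypothetical end-member compositions:

```python
# Simplified end-member mixing: solve for non-negative fractions f
# (summing to 1) such that E @ f approximates the sampled composition,
# where the columns of E are end-member compositions.
import numpy as np
from scipy.optimize import nnls

# Hypothetical compositions (rows: Cl-, HCO3-, SO4^2- in mg/L;
# columns: modern recharge, saline source, glacial recharge).
E = np.array([[5.0, 3000.0, 200.0],
              [300.0, 150.0, 50.0],
              [10.0, 400.0, 900.0]])
true_f = np.array([0.6, 0.3, 0.1])
sample = E @ true_f  # a synthetic "observed" fracture water

# Append a heavily weighted row enforcing sum(f) = 1, then solve.
A = np.vstack([E, 1e3 * np.ones(3)])
b = np.append(sample, 1e3)
f, _ = nnls(A, b)
print(np.round(f, 3))  # recovers [0.6, 0.3, 0.1]
```

With real data the fit is not exact, and the residual (the "mass balance" part of M3) indicates how much of the composition the chosen source waters fail to explain.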
Fracture Water Sampling and Analysis
Fracture water was collected using a Westbay™ Multilevel Groundwater Monitoring System (Schlumberger Water Services). Supplementary Figure S2 shows a schematic of a borehole with an installed Westbay System. This figure illustrates how the Westbay tubing and packers isolate multiple zones within the borehole, thus preventing unnatural vertical fracture water flow within the borehole itself. The tubing fluid is isolated from the formation fluid. In this arrangement, ambient formation fluid flow can pass through the annulus. From inside the tubing, formation fluid can be accessed by lowering a Westbay sampler and container assembly (also shown) to normally closed valved ports positioned between the packers. A larger schematic illustrates a deployed Westbay sampler assembly that is engaged at a selected port. Once the sampler is positioned and engaged, the remotely operated control valve in the sampler is opened to allow formation fluid from the zone to flow into the empty container. The process is monitored by observing changes in fluid pressure during the sequence of operations (see a typical trace of pressure vs. time in Supplementary Figure S2). Once the container is filled, the sampler valve is closed to seal the formation fluid inside the container at in situ pressure. The assembly is disengaged from the port (the port valve automatically closes) and the fluid in the sealed container is retrieved to the surface for further handling.
The fracture water sampler consists of four 250 mL stainless steel tubes connected in series by tubing and Swagelok fittings. Prior to each sampling, the tubes were sterilized by autoclave and the fittings were sterilized by washing them with 70% ethanol. Validation of the sterilization and transport procedures was performed using sterilized water and PCR with bacterial 16S rRNA primers (Muyzer et al., 1993). Since the tube assemblies contacted only the interior of the casing surface, the probability of introducing surface microbes into the sampled volumes was minimal.
The borehole locations within the study site region of interest, and their names, are shown in Figure 1. These sampling locations are situated between the geological boundaries created by the Maskinonge Lake fault, the Mattawa fault (Ottawa River) and by East-West trending diabase dykes that traverse the study site along the boreholes CR-9, CRG-3 and CRG-6. Fracture water was collected from sealed boreholes CRG-1, CRG-2, CRG-3, CRG-4A, CRG-6 and CR-9. Fracture water from an open unsealed borehole, CR-18, was also sampled. Depths of the sampled fracture water ranged from 35 to 780 m (137 to −800 m elevation, relative to sea level).
The fracture water pH [Beckman PHI 265 pH/Temp/mV meter (Beckman Coulter, Inc.)] and conductivity [YSI Model 30 Conductivity Meter (YSI Inc., Yellow Springs, OH, United States)] were measured, and aliquots for elemental analysis were filtered through a 0.45 µm filter (Isopore polycarbonate, Millipore, Billerica, MA, United States) then immediately preserved in nitric acid (ultra-trace grade, Seastar™ Baseline®, Fisher Scientific, Ottawa, ON, Canada). Elemental composition of the fracture water was determined by inductively coupled plasma-mass spectrometry [ICP-MS, using either a Varian 820-MS (Agilent Technologies, Inc.) or an Element XR (Thermo Scientific)] and by inductively coupled plasma atomic emission spectroscopy (ICP-AES, Optima 3300, Perkin Elmer). Anion concentrations were determined using a Dionex 3000 ICS ion chromatograph (Dionex, Sunnyvale, CA, United States). Dissolved organic (DOC) and inorganic carbon (DIC) were determined using a Dohrmann model Phoenix 8000-UV Persulfate TOC Analyzer (Teledyne Tekmar, Mason, OH, United States).
Total and viable microbial densities were determined by fluorescence microscopy with a Nikon E600 microscope and a Zeiss Axiophot microscope after filtering the separate stained samples onto black polycarbonate filters (Fisher Scientific, 25 mm, 0.22 µm pore size); at least fifteen fields of view and at least 300 cells were counted per filter, for a coefficient of variation of 5.8% per filter. Direct counts for total cell densities were determined in triplicate 1 mL volumes, within 4 h of sampling at the formation pressure and within 1 h of opening the sample tubes, including a 30 min incubation time. Total cell densities were determined using the DNA-intercalating dye Sybr Green II (Life Technologies); because separate aliquots were shipped to another laboratory, total cell counts were also determined within 24 h of opening the sample tubes, in this case using Acridine Orange (Sigma-Aldrich) to emulate the procedure employed at the receiving laboratory; the two dyes and two time points gave similar results. Direct counts for viable cell densities were determined in triplicate 1 mL volumes, also within 1 h of opening the sampling tubes, using dyes that are sensitive to different characteristics of viable microbial cells: the soluble 5-cyano-2,3-ditolyl tetrazolium chloride (CTC, Sigma-Aldrich) was used to evaluate respiratory activity within the microbial population, as detected by the reduction of CTC to the insoluble fluorescent CTC-formazan (Schaule et al., 1993); the lipophilic cation rhodamine-123 (R123, Sigma-Aldrich) (McFeters et al., 1998; Fuller et al., 2000) was used to evaluate cells within the microbial population that display a membrane potential difference; and carboxyfluorescein diacetate (CFDA, Sigma-Aldrich) was used to evaluate enzymatic activity (Schaule et al., 1993).
Three end-members were found to describe the fracture water; these are referred to as: (1) 'recharge,' (2) 'Champlain Sea' (or 'saline'), and (3) 'glacial melt' (not shown). The stable isotope values for the melt water end-member were taken from the literature: the δ18O value from Frape and Fritz (1987) and Remenda et al. (1994). The deuterium value was determined by Rozanski et al. (1993). The tritium value, which governs the proportion of recharge, was considered decayed to zero. The end-member referred to as 'Champlain Sea' was obtained from nearby sediment pore water from this period in the site history, which had a salinity of 6.1% (Torrance, 1988). The end-member referred to as modern recharge was calculated as an average of the chemistry of the upper sections of boreholes CRG-2, CRG-3, CRG-4A, CRG-6-1 and CR-9-1. The software Surfer (Golden Software) was used to create 2D cross-section maps.
Generalized Linear Model with a Negative Binomial Distribution
The replicate values for microbial cell densities and geochemistry were averaged for each borehole interval sampling location. Supplementary Figure S3 shows a comparison of quantile-by-quantile plots for the total cell count distribution against theoretical normal and negative binomial distributions. The environmental and spatial data were evaluated as explanatory variables using the glm.nb() function from the R package 'MASS' (Venables and Ripley, 2002). Significant variables were determined by stepwise modeling. Model selection was based on minimizing the Akaike information criterion (AIC). Analysis of variance was applied to the reduced model to determine the significance of the retained variables. Only those values with p < 0.05 were considered significant. All of the model results are provided in an Excel file in the Supplemental Information.
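The stepwise selection described above rests on the AIC, defined as AIC = 2k − 2 ln L for a model with k parameters and maximized log-likelihood L. The paper's models were fitted in R with glm.nb(); purely as an illustration of how AIC trades fit against parameter count, here is a minimal Python sketch comparing two Poisson models whose maximum-likelihood rates have closed forms (the group means). The count data are hypothetical, not the site data.

```python
import math

def poisson_loglik(counts, lam):
    # log L = sum_i [ y_i*log(lam) - lam - log(y_i!) ]
    return sum(y * math.log(lam) - lam - math.lgamma(y + 1) for y in counts)

def aic(loglik, k):
    # Akaike information criterion: 2k - 2*lnL (lower is better).
    return 2 * k - 2 * loglik

# Hypothetical cell counts from two groups of sampling locations.
low  = [3, 4, 2, 5, 3]
high = [12, 15, 11, 14, 13]
both = low + high

# Model 1: one common Poisson rate (k = 1 parameter).
lam_all = sum(both) / len(both)
aic1 = aic(poisson_loglik(both, lam_all), 1)

# Model 2: a separate rate per group (k = 2 parameters).
ll2 = poisson_loglik(low, sum(low) / len(low)) + poisson_loglik(high, sum(high) / len(high))
aic2 = aic(ll2, 2)

print(aic2 < aic1)  # → True: the grouped model wins despite the extra parameter
```

The same logic, applied to a negative binomial likelihood over candidate geochemical predictors, is what the stepwise procedure in 'MASS' automates.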
Spatial Autocorrelation and Moran's Eigenvector Map Coefficients
Moran's eigenvector maps were created by principal coordinates of neighbor matrices (Borcard and Legendre, 2002; Dray et al., 2006) from within the R packages 'spdep' and 'adespatial'. A matrix of spatial eigenvectors was built from a distance matrix of Easting and Northing, zone 18, Universal Transverse Mercator coordinates for each borehole interval. The functions used to create the spatial weightings matrix were tri2nb(), which converts the spatial coordinates of the sampling locations into a distance neighbors map, and nb2listw(), which creates the weightings matrix from the neighbors map. The eigenvectors for positive values of Moran's I reveal different spatial structures over the entire range of scales encompassed by the geographical sampling area. The first MEM values generated in the analyses represent broader spatial structures, and the last MEM values represent finer spatial structures. Values for Moran's I at each sampling location were compared to a null distribution of the global Moran's I using the function localmoran(). The resulting z-values were plotted to display locations with spatial correlations that were more than two standard deviations from the null mean.
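The quantity underlying all of these steps is Moran's I, I = (n/W) · Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)², where wᵢⱼ are the spatial weights and W their sum. The paper computed it in R; as a language-neutral sketch, here is the global statistic in plain Python with a hypothetical weights matrix and attribute values (not the site data). Positive I indicates clustering of similar values among neighbors; negative I indicates dispersion.

```python
def morans_i(values, weights):
    """Global Moran's I: I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where m is the sample mean and W the sum of all weights."""
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    W = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / W) * num / den

# Hypothetical example: four sites on a line, adjacent sites weighted 1.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = [1.0, 1.0, 9.0, 9.0]   # similar values adjacent -> positive I
dispersed = [1.0, 9.0, 1.0, 9.0]   # alternating values -> negative I
print(round(morans_i(clustered, w), 3))  # → 0.333
print(round(morans_i(dispersed, w), 3))  # → -1.0
```

In the paper's workflow, local versions of this statistic (one per sampling location) are compared against a null distribution to yield the z-values plotted in Figure 4.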
Multivariate Mixing and Mass Balance (M3) Modeling
The results from the PCA are shown in Figure 2. The PCA results are displayed three times to illustrate modeling results for mixing of the three fracture water compositional end-members: the percent mixing proportion for glacial melt water (Figure 2, upper left panel), the percent mixing proportion for Champlain Sea (Figure 2, upper right panel), and the percent mixing proportion for modern recharge (Figure 2, bottom panel). The first and second principal components accounted for 71% of the variance in the geochemistry. The area encompassed by the three end-members (Figure 2, triangle joining the three compositional end-members) explains over 97% of the fracture water samples; most of the individual fracture water compositions plot between the three reference waters. The fracture water samples that plot outside the region of the three end-members are shown as open circles (Figure 2, all three panels). The explanatory power of glacial melt, Champlain Sea and modern recharge may indicate that these waters have affected the present fracture water composition and thus represent historical events that could have influenced the fracture water microbial populations. The modeled mixing proportions of the three source waters suggest that fracture water sampled from boreholes CRG-1, CRG-2, CRG-3, CRG-4A, CRG-6, and CR-18 contains mainly modern recharge with a small glacial melt mixing proportion of up to ∼40%. Fracture water sampled from borehole CR-9 includes proportions from these source waters and an additional mixing proportion from a saline water source, referred to here as Champlain Sea. Fracture water accessed from intervals 11 and 12 of borehole CR-9 has mainly a saline water type signature of ∼70%. By this model, the fracture water from CR-9-3, CR-9-8 and CR-18 is a mixture of Champlain Sea, melt water and modern water.
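When exactly three end-members span a triangle in PC1/PC2 space, the mixing proportions of a sample inside the triangle can be read off as its barycentric coordinates: three non-negative weights that sum to one, with any negative weight flagging a sample outside the triangle (the open circles in Figure 2). This is a simplified sketch of that geometric step, not the full M3 procedure, and the coordinates below are hypothetical rather than the site's actual PCA scores.

```python
def mixing_proportions(sample, v1, v2, v3):
    """Barycentric coordinates of `sample` with respect to the triangle
    (v1, v2, v3); the proportions sum to 1, and a negative value means the
    sample plots outside the end-member triangle."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = sample, v1, v2, v3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return a, b, 1 - a - b

# Hypothetical end-member positions in PC1/PC2 space.
recharge, champlain, melt = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
p = mixing_proportions((0.2, 0.3), recharge, champlain, melt)
print([round(v, 2) for v in p])  # → [0.5, 0.2, 0.3]
```

Here the hypothetical sample would be read as 50% modern recharge, 20% Champlain Sea and 30% glacial melt.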
Distributions of the three possible water sources are represented in cross section in Figure 3; the sampling locations within these boreholes that were used for microbial abundance determinations are shown in Figure 3 as white dots. The visualizations were created by 2-D kriging interpolation between the sampling locations within each of these boreholes and do not account for the fractures that would provide the water flow paths throughout the rock matrix. The left-hand side of Figure 3 shows the mixing proportion by prospective source water and the right-hand side shows the distributions of geochemical signatures that correspond to these water sources: bicarbonate (Figure 3, top right panel) for modern recharge; measured δ18O values (Figure 3, middle right panel) for glacial melt water; and chloride (Figure 3, bottom right panel) for a saline source water. These components of the fracture water, therefore, may represent a signature for source water in a GLM.
Spatial Autocorrelation
Spatial autocorrelation refers to similarities in attributes between adjacent locations compared to the attributes between more distant locations (Miller, 2004). Spatial autocorrelation in abundance data can be informative of processes that drive community patterns. Spatial autocorrelation in model residuals, however, can lead to incorrect interpretation of the processes that drive community patterns. To test for spatial autocorrelation within the sampled fracture water, MEM coefficients were calculated and those coefficients associated with positive Moran's I were added to the GLM as independent variables. These coefficients may represent unknown processes occurring locally within the projected area. Local values for Moran's I were also compared with a null distribution of the global Moran's I to identify attributes at sampling locations (for example cell counts) with Moran's I values that were more than two standard deviations from the null mean. A local Moran's I for an attribute that is more than two standard deviations from the null mean in the positive direction indicates that the spatial distribution of that attribute is more clustered than would be expected if underlying spatial processes were random; in this case, the null hypothesis of random distribution of a given 'attribute' would be rejected. A local Moran's I for an attribute that is more than two standard deviations from the null mean in the negative direction indicates the spatial distribution of high and low values for that attribute was more spatially dispersed than would be expected if underlying spatial processes were random; in this case, the null hypothesis would also be rejected.
FIGURE 3 | M3 results for source water end members (left) and a candidate signature associated with the source water (right). The results are shown in cross section referenced to the boreholes CR-9, CRG-1, and CRG-3.
The z-values calculated for the distribution of the various 'attributes' (namely, the cell count densities, concentrations of soluble compounds, pH and the positive MEM coefficients) are shown in Figure 4; the sampling locations are listed by borehole and interval following a West-to-East direction from borehole CR-18 to borehole CRG-2 (see Figure 1). The dashed gray lines and the solid gray lines mark where the first and second standard deviations from the null mean lie. The bars for attributes that extend beyond the mark for the second standard deviation, in the positive or negative direction, identify the sampling locations with spatially non-random attributes. From the plots in Figure 4, the deeper sampling locations within borehole CR-9, at intervals 8, 11, and 12, display non-random attributes relative to the global distributions: clustering of lower total cell counts, of lower bicarbonate concentrations, of higher sulfate and manganese concentrations, and of the MEM coefficients labeled MEM5, MEM7 and MEM10. These intervals are also the sampling locations with a saline signature (Figures 2, 3, bottom left panel); even so, the chloride was not identified as being spatially autocorrelated.
The total cell counts from the shallow sampling location within the same borehole, at interval 2, were dispersed compared to the null mean of spatially distributed count values.
Distribution of the Count Data
Values for total and viable cell densities in fracture water sampled from each borehole are provided in Supplementary Table S1. How the cell densities distribute across the sampling locations is shown in Figure 5 as histograms and as boxplots by borehole arranged in a West-to-East direction (from borehole CR-18 to borehole CRG-2, as shown in Figure 1). The cell densities within boreholes CR-18, CR-9, CRG-1 and CRG-2 form the lower density part of the histograms and the cell densities within boreholes CRG-3, CRG-6 and CRG-4A form the higher density part of the histograms. The same data are plotted as scatter plots by sampling location elevation relative to sea level (Supplementary Figure S4). To help identify possible drivers for the microbial abundances, the distribution patterns for the total and viable cell densities were also compared with the distributions of the fracture water geochemistry (using data taken from Supplementary Table S1) and of the rock porewater components: sulfate, bicarbonate, ammonia, nitrate and nitrite (from Peterman et al., 2016). The resulting quantile-by-quantile plots are shown in Supplementary Figures S5-S8 and in Figure 5 beside the histograms for the cell densities. Quantile-by-quantile plots allow the distribution patterns of two datasets to be compared: if the datasets follow a similar distribution, the data points plot along a straight line; if they do not, the points diverge from a straight line. We find from these comparisons that the microbial cell densities distribute within the subsurface like the fracture water and porewater sulfate, the porewater ammonia and the fracture water manganese, but not like the fracture water or porewater bicarbonate.
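The quantile-by-quantile comparison described above can be sketched in a few lines: sort both samples, pull matched empirical quantiles from each, and check whether the pairs fall on a straight line. This is a minimal Python illustration of the idea (the paper's plots were presumably made in R); the counts and concentrations below are hypothetical, chosen so the two samples share a distribution shape exactly.

```python
def quantile(sorted_xs, q):
    """Linear-interpolation empirical quantile of a sorted sample, 0 <= q <= 1."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    frac = pos - lo
    return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

def qq_pairs(xs, ys, n=9):
    """Matched quantile pairs; if the two samples share a distribution shape,
    the pairs fall near a straight line on a Q-Q plot."""
    xs, ys = sorted(xs), sorted(ys)
    qs = [(i + 1) / (n + 1) for i in range(n)]
    return [(quantile(xs, q), quantile(ys, q)) for q in qs]

# Hypothetical cell counts vs. sulfate concentrations at the same locations.
counts  = [2, 3, 5, 8, 13, 21, 34]
sulfate = [4, 6, 10, 16, 26, 42, 68]   # same shape, scaled by 2
pairs = qq_pairs(counts, sulfate)
# Perfectly proportional samples: every quantile pair lies on y = 2x.
print(all(abs(y - 2 * x) < 1e-9 for x, y in pairs))  # → True
```

Curvature or heavy tails in such a plot, as seen for bicarbonate in the lower quantiles, signal that the two variables do not share a distribution shape.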
Supplementary Figure S5 shows the cell density distributions with the fracture water and porewater sulfate. The total cell count data appear to have a distribution like that for the fracture water sulfate while the CTC and R123 cell density distributions deviate from the straight line (Supplementary Figure S5, top panel). The opposite patterns are seen for porewater sulfate: the distributions for total cell count and the porewater sulfate deviate from a straight line while the CTC and R123 cell densities appear to have distributions like that for the porewater sulfate (Supplementary Figure S5, bottom panel).
FIGURE 5 | Distribution of total and viable cell counts. The boxplots are listed by borehole in a West-to-East direction starting with borehole CR-18 (see Figure 1). Also shown are quantile-by-quantile plots for cell density distributions compared to the distributions for fracture water manganese, sulfate and bicarbonate.
Comparisons with the distributions of fracture water and porewater bicarbonate (Supplementary Figure S6) and of fracture water manganese (Supplementary Figure S7) show that the count data distributions are not like that for bicarbonate in the lower quantiles (Supplementary Figure S6, top panel for fracture water, bottom panel for porewater) but they are distributed like that for manganese (Supplementary Figure S7).
Supplementary Figure S8 shows the quantile-by-quantile plots for the distributions of cell density and the porewater nitrogen compounds: ammonia, nitrite and nitrate. These components of the porewater were not detected within the fracture water. The plots show that the distributions for total and CTC cell densities are roughly linear with the distribution for ammonia (Supplementary Figure S8, top panel). The total, CTC and R123 cell densities are also roughly linear with the distribution of nitrate concentrations but there is flattening in the middle quantiles for nitrate. The plot comparing the cell densities with porewater nitrite suggests these datasets follow different distributions across the formation (Supplementary Figure S8, bottom panel).
Generalized Linear Modeling of the Count Data
The geochemical and descriptive data used for the GLM are given in Supplementary Table S1. The negative binomial GLM function within the R package 'MASS' (Venables and Ripley, 2002) provides a model to assign linear predictors (β) and a description of the random error distribution of the count data. The total and viable count data across all the sampling locations were each used as response variables. Data for the geochemistry and the positive MEM coefficients across all sampling locations were used as the independent variables. Unmeasured environmental variables associated with the microbial cell density distribution would form part of the random component of the resulting linear model.
The independent variables were evaluated first for model selection, then stepwise model fitting was performed. Metabolically relevant components of the geochemistry were: pH; dissolved organic carbon (DOC); bicarbonate; sulfate; iron; manganese; and phosphate. Bicarbonate ion is also a possible signature for modern recharge (Figure 3). Data for chloride ion were included as an explanatory variable for a saline source water component, and the stable oxygen isotope (δ18O) data were included as an explanatory variable for a glacial melt source water component. The spatial weightings matrix identified 11 positive MEM coefficients; these were also included in a model. The resulting coefficients (β) and the confidence interval values for the significant explanatory variables are listed in Supplementary Tables S2-S5. When the models were run with the geochemistry (without the positive MEM coefficients), bicarbonate and manganese were identified as the predictors of total (Supplementary Table S2), CTC (Supplementary Table S3) and R123 (Supplementary Table S4) cell counts. When the model was run with the positive MEM coefficients (without the geochemistry), between two and four coefficients were significant: MEM2 and MEM4 were identified for both total and viable cell counts; and either MEM1, MEM5, MEM7 or MEM10 were identified depending on the count data (Supplementary Tables S2-S5). An analysis of the model residuals for the total count data is provided in Figure 6: environmental variables (Figure 6A), the positive MEM coefficients (Figure 6B), and the measured variables and spatial coefficients (Figure 6C). These plots show that the residuals and fitted total counts are more randomly distributed when the positive MEM coefficients are included in the model (Figure 6B) than when the model included only the environment (Figure 6A). Combining spatial and environmental inputs did not improve the distribution of the model residuals (Figure 6C).
DISCUSSION
The concept of geological radioactive waste repositories is to provide secure locations over long residence times within deep geological settings, allowing for the decay of long-lived radionuclides to background levels. The feasibility of emplacing a repository relies on having knowledge of local and regional subsurface dynamics that would form the basis of predicting the transport of hazardous species from a repository over the period of radioactive decay. Given the role of microbes in element speciation and transport and their effect on element retention by microbial-derived metal oxides (Kennedy et al., 2011), knowledge and understanding of local microbial ecology within prospective host formations can aid predictions used for making long-term safety cases. To gauge interrelationships between subsurface microbial abundances and the geochemical data we combined the abundance data obtained from multiple sampling opportunities from within the crystalline formation underlying the Chalk River Laboratories site (Deep River, ON, Canada) and performed multivariate mixing and mass balance (M3) modeling, spatial analysis and GLM. We considered the dual role of fracture water: as the medium for transport of soluble species suitable for microbial metabolism and as the medium for dispersal.
In our analyses, we identified possible sources of the fracture water and evaluated the fracture water composition as predictors of total and viable microbial cell densities within these fractures, including their distribution patterns. The dilute character of the fracture water compared to other sites on the Canadian Shield is thought to reflect recharge that occurred at the end of the last glaciation followed by a gradual recharge with meteoric water. The fracture water ages date from 5000 to 10,000 years. Modern recharge does not appear to extend deeper than approximately 100 m. We therefore performed modeling to address whether recharge can explain how subsurface communities assemble within these fractures. The main findings from M3 modeling were that three possible meteoric source waters account for 97% of the samples: glacial melt water, a saline source and modern recharge. The mixing proportions for modern recharge and glacial melt water describe most of the samples; the mixing proportion of a saline source water is localized to deeper fractures transected by borehole CR-9 (Figure 3).
Although the stable isotope data for oxygen and hydrogen align with VSMOW (Supplementary Figure S1), supporting the notion of recharge as a main driver of microbial assembly, rock-water interactions may still be important in explaining the fracture water microbiology. A porewater analysis of the drilled rock cores identified nitrogen compounds that were not detected within the fracture water; a finding that corroborates both the measured nitrogen metabolism within the fracture water (Stroes-Gascoyne et al., 2011) and the identified taxa within the fracture water (Beaton et al., 2016) whose cultured relatives encompass the complete nitrogen cycle, including nitrogen fixation. In a study of the component taxa (manuscript in preparation), nitrogen metabolism was detected within all sampling locations; sulfate reduction was detected only within borehole CRG-6.
Recharge into fractures is topology driven. An analysis of the cell density distribution patterns identified location-specific patterns and patterns that were generalized across the sampling locations. The total and viable cell densities fell into two categories: those locations with lower cell densities (locations within boreholes CR-9, CR-18, CRG-1, CRG-2) (Figure 5 and Supplementary Figure S4) and those locations with higher cell densities (locations within boreholes CRG-3, CRG-4A and CRG-6). The sampling locations with the lowest total cell counts were those that had the highest mixing proportions of a saline source water (Figure 2) and the sampling locations with the highest total cell densities were those with higher mixing proportions of modern recharge (Figure 2). Variation in the count data appears to be localized to the region around each of the boreholes and not to the elevation of the sampling locations (Supplementary Figure S4). If the abundance data can be linked to recharge, the influences of local conditions on recharge may need to be accounted for by, for example, overburden thickness and hydraulic conductivity. In a study of modern recharge into another fractured crystalline aquifer that is overlain by variable thicknesses of overburden (Gleeson et al., 2009), the authors conclude that overburden thickness and hydraulic conductivity were major parameters that controlled modern recharge into the underlying bedrock aquifer and that a thicker overburden meant modern recharge was slower and more widespread. A slower recharge rate and higher surface area of unconsolidated overburden would favor higher cell densities.
The distributions of the abundances and the geochemistry were more generalized across the site. The quantile-by-quantile plots show that the total cell count distributions aligned with the distributions of fracture water sulfate, fracture water manganese, porewater sulfate, porewater ammonia and porewater nitrate (Figure 5 and Supplementary Figures S5, S7, S8). The viable cell count distributions also aligned with the distributions of sulfate, manganese, ammonia and nitrate while their distributions compared with bicarbonate were heavy tailed in the lower quantiles -namely, those sampling locations within borehole CR-9; a region of the subsurface found to be distinct in the M3 modeling (Figure 3).
Analysis of spatially distributed sampling locations can reveal distance-relationships in abundance data (Dormann et al., 2007). The assumptions made when modeling population abundances can lead to incorrect conclusions if the model residuals are not randomly distributed (Dormann et al., 2007). We therefore performed a spatial analysis to test for a role for meteoric water recharge on total and viable abundances by comparing a null distribution of the global Moran's I value (Figure 4) and by adding the resulting eigenvector map coefficients into a GLM (Figure 6). The GLM identified bicarbonate and manganese as significant predictors of microbial abundances (Supplementary Tables S2-S5). Both bicarbonate and manganese also show spatial autocorrelation at sampling locations within borehole CR-9; as does sulfate (Figure 4). In our analysis, bicarbonate was considered as a proxy for modern recharge; the proxy for a saline source recharge, chloride, was not identified as significant (Supplementary Tables S2-S5) and, despite the M3 modeling showing the localization of this saline signature (Figures 2, 3), chloride was not spatially clustered with the bicarbonate, manganese and sulfate (Figure 4).
The GLM also identified positive MEM coefficients, of which four clustered within borehole CR-9 and two, MEM2 and MEM4, were randomly distributed. The improved GLM residuals with these coefficients suggest that the significance of the bicarbonate and manganese was due to the localized and distinct fracture water conditions that exist within borehole CR-9, and further suggest that their significance in the GLM reflects this spatial correlation. Inclusion of the MEM coefficients within the GLM improved the distribution of the GLM residuals. The finer scale influences represented by these coefficients may indicate unmeasured/unknown processes; the distribution pattern similarities observed with the quantile-by-quantile plots may help reconcile these processes.
CONCLUSION
The main findings of this work are that M3 modeling identified three possible meteoric source waters for recharge; of these three, modern recharge appears to be the most likely source water to explain, in part, microbial abundances within the projected area of the sampling locations. Chloride, as a proxy for a saline source water, was not a significant explanatory variable for the total or viable count data. Stable oxygen isotope (δ18O), as a gauge of glacial melt water, was also not a significant explanatory variable of microbial abundance distributions.
Spatial autocorrelation analysis shows that low total cell counts co-localize with lower bicarbonate, higher manganese and higher sulfate. These locations are associated with the saline source water signatures. The spatial correlation of both the bicarbonate and the manganese suggests that their significance in the GLM reflects this spatial correlation and not a direct effect on microbial abundances per se. Inclusion of positive MEM coefficients into the GLM improved the distribution of the model residuals. The finer scale influences represented by the significant MEM coefficients suggest there are unmeasured/unknown processes occurring within these sampling locations.
While the fracture water is dilute and of mainly meteoric origin, the prospect of porewater sulfur and porewater nitrogen potentially leaching from the host rock suggests there may be localized processes that are separate from a role of source water recharge in explaining microbial abundance distributions within the projected area of the sampling locations.
"year": 2017,
"sha1": "ae5ee61dc3e8899301c63367a01c456f7de2cc95",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.01731/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae5ee61dc3e8899301c63367a01c456f7de2cc95",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
Physics of Anaesthesia Made Easy
Physics is an attempt to describe the fundamental laws of the world around us. As anesthesiologists we deal with liquids and gases under pressure at varying temperature and volume. These interrelationships are simple and measurable, and their understanding ensures a safe outcome for the patient. For the safe and efficient use of anesthesia apparatus, a basic knowledge of fundamental physics is a must for a clear concept of its working principles. We have tried to simplify the basic physics related to anesthesia through this review article.
Introduction
Basic Concepts
Units of Measurements (Table 1)
Table 1: Units of measurements.
Basic SI units: length (meter); mass (kilogram); time (second); current (ampere); temperature (kelvin); luminous intensity (candela); amount of substance (mole).
Derived units: temperature (degrees Celsius); force (newton); pressure (pascal, bar); energy (electron volt); power (watt); frequency (hertz).
Units not in the SI system: pressure (mmHg); pressure (cmH2O); pressure (standard atmosphere); energy (calorie); force (kilogram weight); volume (liter).
Simple Mechanics
a) 1 kilopascal = 7.5 mmHg
b) 1 bar = 750 mmHg
c) 1 kilopascal = 10.2 cmH2O
d) 1 standard atmosphere = 101.325 kPa
e) 1 calorie = 4.18 J
f) 1 kilogram weight = 9.8 N
g) Pounds/inch² (PSI): atmospheric pressure, 1 atm = 14.7 PSI
h) 1 bar = 100 kPa = atmospheric pressure at sea level [1]
Pressure
a) Force = mass × acceleration; the unit is kg·m·s⁻² = newton (N)
b) Pressure = force/area
c) 1 pascal = 1 newton acting over 1 m²
Gauge pressure is the pressure measured relative to atmospheric pressure [2]. It is used in measuring: a) blood pressure; b) airway pressures. In order for fluid to pass out of the barrel of a syringe, the same pressure must be developed in the syringe. For a 20 ml syringe (diameter 2 cm) the pressure generated is about 100 kPa; even this is 6 times more than a systolic blood pressure of 16 kPa (120 mmHg).
So, during Biers block, pressure in the vein during rapid injection can exceed systolic pressure, particularly if a vein adjacent to the
Units of Measurements
) Gauge pressure is defined as pressure which is measured when unknown pressure is measured relative to atmospheric pressure [2]. This pressure is used in measuring: a) Blood pressure b) Airway measurements In order for fluid to pass out of the barrel of the syringe the same pressure must be developed in the syringe.
a) For a 20ml syringe (diameter 2cm) -pressure generated is 100kPa; even this is 6 times more than SBP of 16kPa (120 mmHg).
So, during Biers block, pressure in the vein during rapid injection can exceed systolic pressure, particularly if a vein adjacent to the cuff is present.
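As a quick sanity check of the syringe figures above, a short sketch (the 31.4 N thumb force is an assumed illustrative value chosen to reproduce the quoted 100 kPa; it is not from the article):

```python
import math

def pressure_from_force(force_n: float, diameter_m: float) -> float:
    """Pressure (Pa) generated by a force applied to a syringe plunger."""
    area = math.pi * (diameter_m / 2) ** 2   # plunger cross-sectional area
    return force_n / area

# 20 ml syringe, plunger diameter 2 cm; assumed ~31.4 N thumb force
p_kpa = pressure_from_force(31.4, 0.02) / 1000
sbp_kpa = 16.0                               # systolic BP: 120 mmHg = 16 kPa
print(round(p_kpa), round(p_kpa / sbp_kpa, 1))  # → 100 6.2
```

The ~6-fold excess over systolic pressure is exactly the ratio quoted in the text for Bier's block.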
Fluid Mechanics
Flow is defined as the amount of fluid or gas passing a point in unit time.
Flow changes from laminar to turbulent above a critical Reynolds number (Re = ρvd/η, where ρ is density, v velocity, d tube diameter and η viscosity); in practice flow is usually turbulent when Re exceeds about 2000.
Laminar Flow
In laminar flow the fluid moves in a steady state with no turbulence or eddies. Flow is greatest in the centre of the tube and zero at the wall. The Hagen-Poiseuille equation is used to determine laminar flow: Q = πΔPr4/(8ηl), where Q is flow, ΔP the pressure gradient, r the tube radius, η the viscosity and l the tube length.
Turbulent Flow
Turbulent flow denotes a situation in which the fluid flows in an unpredictable manner with multiple eddy currents which are not parallel to the sides of the tube through which it is flowing. b) Heliox (a mixture of 79% helium and 21% oxygen) is used to reduce density and thereby improve flow, and is used in respiratory tract obstruction. Helium is much less dense than nitrogen, which makes up 79% of air. In patients with upper airway obstruction, flow is through an orifice and hence more likely to be turbulent and dependent on the density of the gas passing through it. Therefore, for a given pressure gradient (patient effort), there will be a greater flow of a low-density gas (heliox) than of a higher-density gas (air).
c) There is laminar flow during quiet breathing, which becomes turbulent during coughing and speaking, thereby resulting in breathlessness or dyspnea.
d) According to the Hagen-Poiseuille law, flow is laminar at low flows in the flowmeter, while at higher flows the law applicable to turbulent flow applies [3].
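The flow relations above can be sketched numerically. All concrete numbers below (pressure gradient, radii, viscosity, molar masses) are illustrative assumptions, not values from the article; the point is the r4 dependence of laminar flow and the roughly threefold density advantage of heliox over air:

```python
import math

def poiseuille_flow(dp, radius, viscosity, length):
    """Hagen-Poiseuille (laminar): Q = pi * dP * r**4 / (8 * eta * L)."""
    return math.pi * dp * radius ** 4 / (8 * viscosity * length)

def reynolds(density, velocity, diameter, viscosity):
    """Re = rho*v*d/eta; flow is usually turbulent above Re ~ 2000."""
    return density * velocity * diameter / viscosity

# Halving the tube radius cuts laminar flow 16-fold (r^4 dependence)
q_wide = poiseuille_flow(dp=1000.0, radius=1.0e-3, viscosity=1e-3, length=0.05)
q_narrow = poiseuille_flow(dp=1000.0, radius=0.5e-3, viscosity=1e-3, length=0.05)
print(round(q_wide / q_narrow))   # → 16

# Density proxy via molar mass: air (79% N2/21% O2) vs heliox (79% He/21% O2)
air = 0.79 * 28.0 + 0.21 * 32.0       # ~28.8 g/mol
heliox = 0.79 * 4.0 + 0.21 * 32.0     # ~9.9 g/mol
print(round(air / heliox, 1))         # → 2.9
```

Since turbulent/orifice flow scales roughly with 1/√density, this ~2.9-fold density reduction explains the flow advantage of heliox in upper airway obstruction.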
Critical Flow
Critical flow for a typical anesthetic gas has approximately the same numerical value as the diameter of the airway concerned. a) If gas cylinders are stored at high temperatures, their pressures will rise, risking explosion. b) Adiabatic changes: an adiabatic change is one which does not involve transfer of heat (Q) or matter into or out of a system, so that Q = 0; such a system is said to be adiabatically isolated.
Clinical Relevance: When the valve of an oxygen cylinder is opened suddenly, oxygen rushes into the high-pressure hose or stem of the oxygen regulator, and on reaching the end of the hose an adiabatic process may occur: the local pressure, and with it the temperature, rises sharply, which can ignite combustible material such as grease or debris.
Solubility Mechanics
Solubility
Saturated vapour pressure (SVP) is defined as the partial pressure exerted by the vapour when, with a liquid placed in a closed container, an equilibrium is achieved at the surface between the vapour and the liquid itself. SVP is associated with Henry's law [4].
Henry's law states that, at a constant temperature, the amount of a given gas dissolved in a given liquid is directly proportional to the partial pressure of the gas in equilibrium with the liquid. c) Heliox, a mixture of helium and oxygen, is a lighter gas and hence is used in airway obstruction to improve diffusion and gas exchange.
Fick's Law of Diffusion:
The rate of diffusion of a gas across a membrane is directly proportional to the membrane area (A) and the concentration gradient (C 1 -C 2 ) across the membrane and inversely proportional to its thickness (D). This results in effects known as the "concentration effect" and the second gas effect. When a constant concentration of an anesthetic such as sevoflurane is inspired with nitrous oxide, the alveolar concentration of sevoflurane is accelerated due to nitrous oxide, because alveolar uptake of the latter creates a potential sub atmospheric intrapulmonary pressure that leads to increased tracheal inflow.
Rate of diffusion ∝ A(C1 − C2)/D
Diffusion Hypoxia: At the end of anesthetic exposure, nitrous oxide diffuses rapidly out of the blood into the alveoli, diluting the alveolar gases and leading to a fall in oxygen saturation, known as diffusion hypoxia; therefore 100% oxygen is given at the end of surgery to avoid it.
Osmolarity
It is defined as the sum total of the molarities of the solutes in a solution.
Energy Mechanics
Heat Capacity: Heat Capacity is defined as the amount of heat required to raise the temperature of a given object by 1 kelvin.
Specific Heat Capacity
Specific heat capacity is defined as the amount of heat required to raise the temperature of 1 kg of a substance by 1 kelvin (J/kg/K).
Clinical Relevance
Normal body temperature is 36 degrees Celsius and basal heat production is 80 W (J/s). Shivering increases heat production roughly 4-fold (i.e. 320 W, an extra 240 W = 14.4 kJ/min). About 245 kJ is needed to raise body temperature by 1 degree (total heat capacity = specific heat capacity 3.5 kJ/kg/K x 70 kg), so the patient has to shiver for approximately 245/14.4 ≈ 17 min to produce this extra heat.
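The shivering arithmetic can be checked in a few lines (figures as quoted in the text: 3.5 kJ/kg/K, 70 kg body mass, basal heat production 80 W):

```python
specific_heat = 3.5        # kJ/kg/K, whole-body average (from the text)
mass = 70.0                # kg
basal = 80.0               # W
shivering = 4 * basal      # W, ~4-fold increase

extra_w = shivering - basal               # 240 W of extra heat production
extra_kj_per_min = extra_w * 60 / 1000    # convert W (J/s) to kJ/min
heat_needed = specific_heat * mass        # kJ per 1 K rise in body temperature

print(round(extra_kj_per_min, 1), round(heat_needed / extra_kj_per_min))  # → 14.4 17
```

This reproduces the article's estimate of roughly 17 minutes of shivering per degree of rewarming.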
Bernoulli's Principle
Bernoulli's principle follows from the law of conservation of energy. A flowing liquid possesses two types of energy: potential and kinetic. If there is a constriction in a tube, the kinetic energy increases and there is a subsequent fall in potential energy (pressure), so that the total energy is conserved [4].
The Venturi Effect
The Venturi effect was named after the Italian physicist Giovanni Battista Venturi (1746-1822). It is the effect by which the introduction of a constriction to fluid flow within a tube causes the velocity of the fluid to increase and, therefore, the pressure of the fluid to fall. By measuring the change in pressure, the flow rate can be determined, as in flow-measurement devices such as Venturi nozzles and orifice plates; the same pressure drop is used to entrain air in Venturi masks.
The Venturi effect may be observed or used in the following:
a) The capillaries of the human circulatory system, where it indicates aortic regurgitation.
b) Injectors used to add chlorine gas to water-treatment chlorination systems.
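A minimal sketch of the Venturi pressure drop from continuity plus Bernoulli (the gas density, inlet velocity and 4:1 area ratio are assumed illustrative values, not from the article):

```python
def venturi_pressure_drop(density, v1, area_ratio):
    """Continuity: v2 = v1 * A1/A2; Bernoulli: dP = rho/2 * (v2**2 - v1**2)."""
    v2 = v1 * area_ratio
    return 0.5 * density * (v2 ** 2 - v1 ** 2)

# Air (rho ~ 1.2 kg/m^3) entering a 4:1 constriction at 10 m/s
dp = venturi_pressure_drop(1.2, 10.0, 4.0)
print(round(dp))  # → 900  (Pa of pressure drop across the constriction)
```

It is this sub-atmospheric pressure at the constriction that entrains room air in a Venturi mask.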
Spectrophotometry - Basic Concepts
a) Beer's Law
Beer's law states that the amount of light absorbed is proportional to the concentration of the light-absorbing substance.
b) Lambert's Law
Equal thicknesses absorb equal amounts of radiation: the amount of light absorbed is proportional to the length of the path that the light has to travel in the absorbing substance. Both laws state that the absorption of radiation depends on the amount of a particular substance, a fact utilized in pulse oximetry. c) The more haemoglobin per unit area, the more light is absorbed; this property is described by Beer's law. The longer the path the light has to travel, the more light is absorbed; this property is described by Lambert's law.
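A minimal sketch of the combined Beer-Lambert relation underlying pulse oximetry (the numeric inputs are arbitrary illustrative values; ε, c and l must be in consistent units):

```python
def absorbance(epsilon, conc, path_len):
    """Beer-Lambert: A = epsilon * c * l."""
    return epsilon * conc * path_len

def transmittance(a):
    """Fraction of light transmitted for absorbance a."""
    return 10 ** (-a)

# Doubling concentration (Beer) or path length (Lambert) doubles absorbance
a1 = absorbance(1.0, 0.5, 1.0)
print(absorbance(1.0, 1.0, 1.0) == 2 * a1,
      absorbance(1.0, 0.5, 2.0) == 2 * a1)  # → True True
```

Pulse oximeters exploit this linearity by comparing absorbance at two wavelengths to estimate the ratio of oxygenated to deoxygenated haemoglobin.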
Glob J Anes & Pain Med
a) For the flow of blood in a blood vessel (flow = ΔP/R), ΔP is the pressure difference between any two points along a given length of the vessel. When describing the flow of blood to an organ, the pressure difference is generally expressed as the difference between the arterial pressure (PA) and venous pressure (PV).
Law of Laplace (Wall Stress):
Laplace's law states that for a cylinder, T = P·r, i.e. P = T/r (e.g. arteries); for a sphere, P = 2T/r (e.g. anesthesia reservoir bag, heart), where T = wall tension, P = pressure of the fluid within the cylinder/sphere, and r = radius. Tension may be defined as the internal force generated by a structure.
Clinical Relevance
a) In a failing heart there is an increase in radius and therefore a decrease in pressure, and the failing heart is unable to increase T. b) In a normal heart, an increase in radius occurs because of increased venous return, and tension also increases according to the Frank-Starling law; therefore there is no change in pressure.
c) The management of stable angina is to reduce wall stress thereby decreasing myocardial oxygen demand.
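The Laplace relations can be expressed as a small helper; the tension and radius values below are arbitrary illustrative units, chosen only to show that doubling the radius of a sphere at constant wall tension halves the pressure (the failing-heart situation above):

```python
def laplace_pressure(tension, radius, sphere=False):
    """Laplace's law: P = T/r for a cylinder, P = 2T/r for a sphere."""
    return (2 if sphere else 1) * tension / radius

# Dilated (failing) ventricle: same wall tension, larger radius -> lower pressure
p_normal = laplace_pressure(tension=1.0, radius=1.0, sphere=True)
p_dilated = laplace_pressure(tension=1.0, radius=2.0, sphere=True)
print(p_normal, p_dilated)  # → 2.0 1.0
```

The same relation explains the reservoir-bag behaviour discussed later: as the bag's radius grows, the pressure it sustains falls.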
Archimedes' Principle
Archimedes' principle is a law of physics fundamental to fluid mechanics. It states that any object, wholly or partially immersed in a stationary fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. a) Air Bubbles: According to Archimedes' principle, air bubbles always tend to rise in any liquid, including saline, drugs, and blood. So, by keeping the cone of the syringe (the exit path) uppermost, bubbles can be removed by expelling the air with the plunger. b) Cardiac Surgery: During cardiac surgery, de-airing is done before aortic de-clamping in order to prevent air bubbles from reaching the brain. If de-airing is performed through a ventriculotomy, the anesthesiologist is asked to place the patient in the Trendelenburg position, so that the venting site is located uppermost and air expulsion is favored. c) Archimedes' principle helps cardiac anesthesiologists to prevent (or reduce) cerebral air embolism when air accidentally enters the circuits during cardiopulmonary bypass (CPB), by immediately placing the patient in a steep Trendelenburg position.
Calculating the Duration of an N2O Cylinder
A new N2O cylinder has just been fitted to the machine: how long will it last? Suppose the flow of N2O is 3 L/min = 180 L/h and the cylinder yields 1272 L of gas; it will then last 1272/180 ≈ 7 h.
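The cylinder arithmetic as a one-line check (the 1272 L gas yield is the figure quoted in the text; actual cylinder contents depend on size and fill):

```python
def cylinder_duration_h(gas_volume_l, flow_l_per_min):
    """Hours a cylinder lasts at a constant fresh-gas flow."""
    return gas_volume_l / (flow_l_per_min * 60)

print(round(cylinder_duration_h(1272, 3), 1))  # → 7.1
```

Note that because N2O is stored as a liquid, the cylinder pressure gauge does not fall linearly; duration must be estimated from the gas volume, as here, rather than from pressure alone.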
Unexpected Help from the Reservoir Bag
The reservoir bag in an anesthesia machine allows manual ventilation as well as "visual" monitoring of spontaneous breathing. a) Thanks to Laplace's law, it can prevent barotrauma in case of malfunction or unintentional closing of the APL (adjustable pressure-limiting) valve. In the presence of an overflow or a flow obstruction in the breathing system, the radius of the reservoir bag increases (Figure 5) and, according to Laplace's law, the pressure inside it decreases (P = 2T/r), thus preventing a dangerous rise in pressure in the entire breathing system and, consequently, in the lungs. b) Accordingly, a reservoir bag which feels stiff should be replaced, since its wall tension (which we can define, similarly to surface tension γ, as the work required to extend the surface of an elastic membrane by a unit area) will be higher for the same radius (or its radius will increase by a lesser extent for the same value of wall tension), thus providing less "pressure relief".
Conclusion
Anesthesia has evolved very fast over the last few decades, but the basics are still the same and applicable to day-to-day anesthesia instruments and apparatus. It is necessary to understand the basic physics behind every anesthetic instrument so that it becomes easy to operate. Learning conceptual physics also helps to troubleshoot the problems associated with them.
Boundary states as exact solutions of (vacuum) closed string field theory
We show that the boundary states are idempotent B*B=B with respect to the star product of HIKKO type closed string field theory. Variations around the boundary state correctly reproduce the open string spectrum with the gauge symmetry. We explicitly demonstrate it for the tachyonic and massless vector modes. The idempotency relation may be regarded as the equation of motion of closed string field theory at a possible vacuum.
Introduction
Study of the off-shell structure of string theory is an essential step in understanding its non-perturbative physics. In recent years, Witten-type open string field theory [1] has been intensively examined in this context. One of the goals is to understand D-branes as soliton solutions of open string field theory. One of the promising discoveries was that the energy of the tachyon vacuum correctly reproduced the tension of D-branes, at least numerically [2]. Inspired by the experience of noncommutative field theory, it was conjectured by Rastelli, Sen and Zwiebach that the D-branes may be understood as the solutions to the projector equation Ψ ⋆ Ψ = Ψ, (1.1) where ⋆ is the noncommutative and associative Witten-type star product for an open string field. It was conjectured that this equation may be understood as the equation of motion of a string field expanded around the tachyon vacuum (the so-called vacuum string field theory (VSFT) conjecture [3][4]). In particular, a few examples of projectors, the sliver state and the butterfly state, were examined as candidates to describe the D-brane.
It turned out, however, that the treatment of D-branes in open string field theory is very delicate. One of the difficulties was the description of the closed string sector. In Witten-type open string field theory, the action does not include the closed string degrees of freedom at the tree level. If we need to describe them in open string language alone, we have to consider singular states, such as the identity string field with the closed string vertex inserted at the midpoint [5][6][4]. The midpoint in open string field theory causes many subtleties; for example, it causes the breakdown of associativity [7], and we have to be very careful while handling such a degree of freedom. The D-brane couples to the closed string sector (for example, gravity) at the tree level, and we cannot escape from using such a singular description. The level truncation regularization seems to handle it numerically. However, the analytic treatment of the problem remains a real challenge.
In this paper, we change the viewpoint and start the analysis of D-branes in closed string field theory.
We believe that such a treatment is natural, since the nature of D-branes is most precisely encoded in the boundary state |B⟩, which lives in the Hilbert space of the closed string sector. In particular, we will prove that the boundary states (both for Neumann and Dirichlet boundary conditions) satisfy an analogue of Eq.(1.1), |B⟩ ∗ |B⟩ ∝ |B⟩. (1.2) Unlike the open string version, Eq.(1.2) has a natural geometrical meaning. The boundary state, as suggested by its name, describes the boundary condition of the string world sheet. Suppose there exist two holes with the same type of boundary condition. If we merge these two holes by a closed string star product, we expect to have the same boundary condition on the new hole. To demonstrate this observation explicitly, we have to be specific about the choice of the star product. There are three candidates for closed string field theory which have been well examined so far.
The oldest one is the light-cone gauge approach [9]. This is consistent in the sense that it produces the correct integration range over the moduli parameter. However, for the application to our problem, it is not useful since the boundary states have nontrivial dependence on the time coordinate. We need covariant descriptions.
The second one is the closed string version of Witten's open string theory. A generalization of Wittentype midpoint interaction vertex to closed strings results in nonpolynomial string field theory [10,11] 2 . The action contains infinitely many terms to cover the moduli spaces for the Riemann surfaces corresponding to various interactions. This approach contains many mathematically interesting features such as L ∞ structure. Handling of the moduli parameters still remains as a challenge, however, and it has not reached the completely satisfactory level.
The third one is based on a split-joining type vertex, which was proposed about the same time as Witten-type open string field theory and is now known as HIKKO's (Hata-Itoh-Kugo-Kunitomo-Ogawa) string field theory [13,14]. It has exactly the same action as Witten's open string field theory, namely, the kinetic term and a three string interaction. 3 In HIKKO's theory, it is necessary to introduce a parameter called string length α to specify string interactions, which has no analogue in Witten-type string field theories. It must be integrated in computing physical quantities and might cause a divergence in loop amplitudes [15]. The simplest way to resolve this difficulty is to just set α = p + , but it breaks the covariance.
To summarize, there is no completely satisfactory closed string field theory. In this paper, we adopt HIKKO's star product to explicitly demonstrate Eq.(1.2). However, we expect it to hold even if we replace it with a Witten-type product. We will come back to prove it in our future paper [16]. We would like to propose this relation as a universal characterization of the boundary states in closed string field theory, which is independent of the specific proposals for the action. A merit to use HIKKO's approach is the analogy of the action with Witten's open string field theory. If we want to have an analogy with VSFT proposal, this gives a good reason to start from it.
We note that HIKKO's ∗ product in Eq.(1.2) has different properties compared with Witten's star product in open string field theory. They may be summarized as follows: first of all, the product is (anti-)commutative (1.3); while it breaks associativity, it satisfies the analogue of the Jacobi identity (1.4). In a sense, it has the same properties as the commutator of Witten-type open string fields. Since the nature of the product is different, we cannot interpret the equation (1.2) as defining a projector. In the following, however, we will continue to use the word "projector" to describe a state that satisfies Eq.(1.2), because of the similarity with the discussion of the open string.
We conjecture that Eq.(1.2) gives a good characterization of the conformally invariant boundary. For this purpose, we calculate an infinitesimal variation of the boundary state of the following form, δ|B⟩ ∝ ∮ dσ V(σ)|B⟩, (1.6) where V(σ) is a vertex operator inserted at the boundary. We argue that the idempotency condition (1.2) requires the vertex V to be marginal. We will prove this expectation for the tachyonic state and the massless vector state. For such variations, this gives the mass-shell condition for these open string modes. In a sense, the idempotency condition knows the mass-shell condition of the open string, even though it is an equation for closed string states! We note that our argument is very similar to the discussion of vacuum string field theory. For example, the use of the variation of Eq.(1.2) to derive the mass-shell condition for open string states was examined in the VSFT context by Hata-Kawano [17] and Okawa [18]. In particular, in the latter approach, the marginal deformation was made over the whole boundary. This is basically the same variation as Eq.(1.6). The difference is, of course, the Hilbert space where the projector lives. In VSFT, to describe such a projector, we have to consider singular states. For example, the sliver state is made by taking infinite star products of the vacuum state. On the other hand, our closed string description does not involve such singular manipulations. The boundary state is a well-defined state in the boundary conformal field theory. In this way, we can escape from the subtleties of VSFT.
The paper is organized as follows. In section 2, we give the explicit definitions of the boundary states and the 3-string vertex which are discussed in this paper. We will then present our claims more precisely. The proof is given explicitly in the following sections which are rather technical. In section 3 we prove the idempotency relation of the boundary states. We need many properties of the Neumann coefficients which are summarized in appendix C. In section 4 we investigate infinitesimal variations around the boundary state and derive on-shell condition of open string on them. In section 5, we discuss some issues of our results.
2 Boundary state and star product of closed string field theory
Boundary states
The boundary states |B(F)⟩ which we are going to discuss are those for Dp-branes with constant field strength F_µν [19]. Here x^µ (µ = 0, 1, · · · , p) are the coordinates along the Neumann directions and x^i (i = p + 1, · · · , d − 1) are along the Dirichlet directions. We use the letters M, N (= 0, · · · , d − 1) to represent all these directions. We put d = 26 since we are considering bosonic string theory. These states satisfy the conditions (2.4-2.6) below, and the boundary states are invariant under the BRST transformation.
Reflector and 3-string vertex
HIKKO's star product for the closed string is a covariant version of light-cone string field theory. It is defined by the reflector ⟨R|, which maps a ket vector to a bra vector, and the three-string vertex |V(1, 2, 3)⟩, which lives in the tensor product of three closed string Hilbert spaces.
[Footnote 4] We summarize our notation for the oscillators and the vacuum state in appendix A. In particular, we use c̄ to denote the anti-ghost (usually written as b), following HIKKO's convention [13,14]. For the ghost zero-mode convention, we use the π₀^c-omitted formulation (section VB in Ref. [14]).
[Footnote 5] This property is essential to couple the boundary state as an external source to closed string field theory. The authors of Ref. [20] proposed such an action; namely, Q_B|B(F)⟩ = 0 is necessary to satisfy the gauge invariance of S_tot. This was the first example where the boundary state appeared essentially in closed string field theory. They used this action to derive the open string (Born-Infeld) action and proved its gauge invariance through string field theory. An unsatisfactory point was, however, that one needs to put the boundary state in by hand from outside. Our study starts from a hope to derive it within the framework of closed string field theory.
The reflector is defined as in [14]. The 3-string vertex |V(1, 2, 3)⟩ is given explicitly in terms of oscillators (2.14). The coefficients Ñ^rs_mn are the Neumann coefficients: Ñ^rs_mn := √m N^rs_mn √n, Ñ^r_m := √m N^r_m. Their definitions and some formulae which they satisfy are given in appendix C. ℘^(i) is a projector imposing the level-matching condition N⁺ = N⁻ on the i-th string. Note that some of the above can be rewritten in the presence of the δ-functions which impose p₁ + p₂ + p₃ = 0 and α₁ + α₂ + α₃ = 0.
Main results
At this point, it is possible to make a precise statement of our results, given as follows.
1. We slightly redefine the boundary state |B(F)⟩: (i) by multiplying by c̄₀, to obtain the correct ghost number of a closed string field in the physical sector of the gauge-fixed action [14], and (ii) by including the string-length parameter (α parameter). We claim that it satisfies the relation (2.26) (the "projector equation" with the ghost insertion). The matrix r is given explicitly, and the normalization c depends on the ratio of the α parameters.[Footnote 7] 2. We consider infinitesimal variations of Φ_B of the following form.[Footnote 8] The first (second) one corresponds to the tachyonic mode (vector particle) of the open string. The infinitesimal variation of Eq.(2.26) gives the following constraints, where G_µν is the "open string metric" on the Dp-brane. These are precisely the mass-shell conditions for the tachyon and the vector particle.
The other part of the physical state conditions for the vector particle, the transversality condition k^ν G_νµ ζ^µ = 0, becomes rather subtle. At the level of the "equation of motion", the coefficient of this factor takes the form 0 × ∞, and we cannot make a definite statement without more precise knowledge of the regularization scheme.
We note, however, that the variation (2.32) is invariant under the gauge transformation; namely, if we change ζ_µ → ζ_µ + εk_µ (2.36)
[Footnote 7] While we have not succeeded in determining it analytically, we can evaluate it numerically by truncating the matrix r to L × L; a good fit is then obtained, with an error at L = 100 of about ±0.02. This estimate shows that c/L³ is a finite and well-behaved function of β.
[Footnote 8] The normal ordering which is necessary here is defined in appendix D.
in Eq.(2.32), δ_V|Φ_B(α)⟩ is not affected at all, since the change can be written as a total derivative with respect to σ and drops out after the integration. In this sense, the gauge symmetry is automatically encoded in the vector particle.
One may give an intuitive proof of the projector equation, Eq.(2.26). We note that the boundary conditions (2.4-2.6) for |B(F)⟩ and the overlap conditions (2.20-2.22) for |V(1, 2, 3)⟩ are local requirements on the boundary, namely they are defined for each σ. Therefore, if we impose the same boundary conditions on |B₁⟩ and |B₂⟩, they are translated into the same boundary conditions for |B₁ ∗ B₂⟩ at the corresponding point. Since the boundary state is determined by the boundary conditions up to normalization, |B₁ ∗ B₂⟩ must be proportional to the same boundary state. A more explicit proof of this identity in terms of the Neumann coefficients is, as we see below, rather lengthy though mostly straightforward. We have to use many nontrivial identities of the Neumann coefficients; in this sense the computation illuminates the special rôle played by the boundary state.
Proof of the idempotency of the boundary states
In the following sections, we give the technical details of the proof of Eqs.(2.26, 2.34). We first derive the star product of the boundary state which includes the additional linear term in the exponential. It will be used to give the source term to derive the variation of the boundary state.
We consider a tensor product of the boundary states, where we use an abbreviated notation. We note that O_g = 1 for the conventional boundary state. We include this extra degree of freedom since there exists another choice, O_g = −1, which also satisfies the projector equation, as we will see later.
The corresponding bra state is obtained by applying the reflector and the projectors.[Footnote 9] We take the inner product of this state with the 3-string vertex (2.11). For this purpose, it is convenient to rewrite the factor in the exponential using some additional notation. By taking the inner product with the aid of the useful formulae Eqs.(B.2), (B.4) in the appendix, we arrive, after some calculation, at the general formula (3.13).[Footnote 10] In the derivation of this formula, we do not use any information on the particular form of M. In this sense, this gives the general formula for the star product of generic squeezed states of the form (3.1).
[Footnote 9] The elements of M, M_g are not changed by ℘ because of the form (3.3).
[Footnote 10] In this expression and in the computations that follow, we omit the suffix (3) on the oscillators; the same convention is also applied to c and c̄.
This expression looks hopelessly complicated. In particular, the appearance of the inverses of the Neumann coefficients, (1 − ÑM)⁻¹ or (1 + Ñ_g M_g)⁻¹, in H_m and H_g looks unmanageable and even singular for generic M.
A major simplification occurs, however, when we replace the matrices M, M_g by those of the form (3.3); in this case one may use a reduced expression, with a similar one for the ghost sector. We note that O and n commute with each other, since the matrix O acts on the Lorentz indices while the Neumann coefficients act only on the level indices. The problem is reduced to deriving the inverse (1 − n²)⁻¹. At first look this is singular, since in the relation (C.5) among the Neumann coefficients, the size of the matrix on the left-hand side with respect to the indices {(r, n), (s, m)} is "2 × ∞", whereas the summation on the right-hand side is taken over an "∞" set. If we naively regularize the Neumann matrices N^r3, Ñ^3s (r, s = 1, 2) by truncating their sizes to L, the rank of (1 − n²) becomes L while its size is 2L.
It is a surprise that, contrary to this naive expectation, it has a well-defined inverse. This is a specialty of infinite-dimensional matrices. For the explicit computation we need the detailed forms of the Neumann coefficients: A^(r) and B describe the overlap of the Fourier bases of the three strings at the vertex. A crucial property of A^(r) (r = 1, 2) is that they have an inverse, which was proved in [21]. By using this inverse, one obtains the inverse of 1 − n², and with this we derive the relations which are essential to show the idempotency of the boundary state. We list many other formulae for the Neumann coefficients in appendix C.
In the next section, we will need to cut off the range of the lower indices of the Neumann coefficients to obtain a finite result. We need to impose the condition that the combination Σ_ℓ Ñ^r3_pℓ Ñ^3s_ℓq (3.19) has an inverse, as mentioned above, even in the finite-size truncation. This observation will have an important consequence later. Now we come back to the computation of the ∗ product. By using the relations (3.25), we can simplify the Gaussian part of Eq.(3.13). We neglect the λ dependence for the moment, since it is not relevant to the proof of the idempotency of the boundary states. The exponents (3.15) and (3.16) are for the matter and the ghost sector, respectively. These expressions are further simplified by conditions which are satisfied automatically for the conventional boundary state (2.25). After using these relations, we arrive at the final result. The ghost prefactor C in Eq.(3.13) is simplified further for O_g = +1, and for O_g = −1, where σ₁ = π, σ₂ = 0, σ₃ = π(β + 1). (3.35) The second factor of the normalization c is simplified by det(4(Σ_{r=1,2} A^(r) A^(r)T)Γ⁻²) = det(4(Γ − 1)Γ⁻²). After using the expression Ñ³³ = 1 − 2Γ⁻¹ and the relations (C.1-C.3) in appendix C, we obtain Eq.(2.28).
Thus far, we have derived the result for string fields with nonvanishing momentum only along the Dirichlet directions, where c and C_± are given by the preceding equations.
Regarding Eq.(3.38) as an analogue of the equation of motion of VSFT, the string field |Φ₀⟩ with O_g = −1 corresponds to Hata-Kawano's "sliver-like" solution of VSFT [22]. On the other hand, in the case of O_g = +1, |Φ₀⟩ corresponds to the "identity-like" solution of VSFT [23], with the analogy C₊ = ∂/∂c̄₀ ∼ c₀. Although there are two choices for the ghost sector, O_g = ±1, in Eq.(3.38), only |Φ₀⟩ with O_g = +1 is related to the boundary states (2.1), which have conventional BRST invariance. In the following section, we discuss only |Φ_B⟩ (2.25) with O_g = +1.
Fluctuation around projectors
In this section we consider the two types of fluctuations, Eqs.(2.31, 2.32), around |Φ_B⟩ and demonstrate explicitly that the idempotency condition (2.33) produces the on-shell conditions (2.34) for these particles. We note that variations of this type correspond to the open string modes on the D-brane (see, for example, [19]). We conjecture that the "equation of motion" Eq.(3.38) will produce the on-shell condition for all of them, namely, that they should be marginal deformations on the boundary. We pick the two simplest examples to illustrate this idea explicitly.
Before we start the computation for these cases, we give a few technical remarks.
1. By using Eq.(3.13) with nonzero λ as the generating functional, we will compute the left-hand side of Eq.(2.33). Explicitly, for a fluctuation δΦ_B we can compute the ∗ product |δΦ_B ∗ Φ_B⟩, and |Φ_B ∗ δΦ_B⟩ similarly.
2. From the definition of the tensor product of the boundary states, Eq.(3.1), we can define projection matrices P_± which satisfy P_±² = P_±, P_±^T = P_±, P_± P_∓ = 0, P_+ + P_− = 1. 3. The λ-dependent terms of the exponent in Eq.(3.13) can be simplified by using the identities of the Neumann coefficients (appendix C.3).
Tachyon type fluctuation
We consider a tachyon-type fluctuation of the form Eq.(2.31). After using the identification of the oscillators on the boundary state, (D.1-D.3), (D.13), the variation takes the following form. We need some explanation of our notation: the bra vector (cos nσ| (or (sin nσ|) has only the level index n, and its n-th component is cos nσ (or sin nσ). In this notation we may also write, for example, (cos(nσ)/√n| ≡ (cos(nσ)|C^{−1/2}, and so on. The other bra vector (1, 1| has the index ± which distinguishes the left and right movers. Finally, P_± carry both Lorentz and left/right ± indices, as defined in Eqs.(4.4), (3.3).
The integration with respect to σ appears automatically because of the projection ℘ in the definition of the * product (3.13).
We investigate the "on-shell" condition which is imposed by Eq.(2.33). We evaluate the * products e E 2 e c (+) †c(−) † +c (−) †c(+) †c 0 |k µ , x i , α 1 + α 2 (4.12). 13 The calculation of E 1 , E 2 is reduced to that of H m in the previous section, Eq.(3.15). We have already evaluated the first four terms, but here we need to keep the nontrivial k dependence. The last two terms are simplified in Eqs.(4.6),(4.7). In the computation of E 1 we put λ (2) = 0. For the quadratic part in the oscillators of E 1 , we need to evaluate the inner products of the vectors (cos nσ)C −1/2 or (sin nσ)C −1/2 with the matrices A T (r) and D (r) . These are reduced to the calculation of Fourier transforms, which we explain in detail in Appendix C, Eqs.(C.30-C.46). They simplify the linear part dramatically, and the result is identical to the linear part coming from the tachyon vertex. The constant part, involving sums of cos m(σ + θ 1 ) cos n(σ + θ 1 ), is similarly computed. The overall factor G µν of E is the "open string metric" on the Dp-brane.
13 We denote λ (r)θr = (λ (r)(+)θr , λ (r)(−)θr ) = (e −inθr λ
We evaluate the numerical factor [· · · ] in Eq.(4.18). The quantities in the first line are convergent. On the other hand, the evaluation of the terms in the second and third lines is very subtle. The two terms with δ mn can be summed to give Σ_{m=1}^∞ 1/m, which diverges logarithmically. The summation of the other two terms, if we first perform the double sum Σ_{m,n=1}^∞ using Eqs.(C.30),(C.31), gives again −Σ_{p=1}^∞ 1/p, which is divergent but with negative sign. The summation of the first two terms is finite and exactly cancels with the first line of Eq.(4.18). As for the third term, we encounter a subtle cancellation of the form [· · · ] = Σ_{m=1}^∞ 1/m − Σ_{p=1}^∞ 1/p, and we need some regularization to obtain a finite result. 14 For this purpose, we cut off the infinite dimensional matrices A (r) . As we commented in the previous section, in order that A (r) has the inverse D (r) in the sense of Eq.(3.23), we should regard the A (r) and D (r) as combined into single matrices A and D. With these combinations, the relation Eq.(3.23) becomes simply AD = DA = 1. In the cut-off regularization, we demand that A and D be L × L matrices with a large integer L. This implies that the sub-blocks A (r) become rectangular matrices of size L × L r with L 1 + L 2 = L, and in the following we adopt this division. An explanation of this division comes from the definitions of A (r) summarized in the appendix (C.1). The first lower index p (resp. the second lower index m) in A (r) pm labels the Fourier bases cos(pσ/α 3 ) (resp. cos(mσ/α 1 ) or cos(m(σ − πα 1 )/α 2 )). Cutting off the label p at L is equivalent to discretizing the world-sheet parameter σ into L points. Through the overlap given by the vertex, there exist L 1 = (α 1 /(α 1 + α 2 ))L = −βL (resp. L 2 = (α 2 /(α 1 + α 2 ))L = (1 + β)L) points on the first (resp. the second) closed string. While this reasoning may look weak, it turns out to be the unique choice which correctly produces the open string spectrum including the higher modes.
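The role of the proportional cut-off can be illustrated with the harmonic sums themselves: if two logarithmically divergent sums are cut off at cL and L with a fixed ratio 0 < c < 1 (playing the role of −β), their difference converges to log c. A small sketch, not tied to the paper's actual Neumann-matrix regularization:

```python
import math

def harmonic(N):
    """Partial harmonic sum H_N = sum_{m=1}^{N} 1/m."""
    return sum(1.0 / m for m in range(1, N + 1))

c = 0.4  # fixed cut-off ratio, standing in for -beta
for L in (10**3, 10**4, 10**5):
    # H_{cL} - H_L -> log c as L -> infinity: the difference of the two
    # divergent sums is finite once the cut-offs are taken proportionally.
    print(L, harmonic(int(c * L)) - harmonic(L), math.log(c))
```

The finite remainder thus depends only on the ratio of the cut-offs, which is why the split L 1 /L 2 = −β/(1 + β) matters while the overall L drops out.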
With this regularization, we obtain a very compact result for E 1 . We can derive E 2 similarly (4.24).
14 There was a similar subtlety of the tachyon mass around the sliver solution in the oscillator approach of VSFT which was proposed in [17]. As was shown in [24], [25], the correct mass was reproduced using a regularization of the Neumann matrices, although it becomes divergent if one uses the relations among them naively.
Vector type fluctuation
Next, we consider a vector type fluctuation of the form of Eq.(2.32), which, after using the properties of the boundary state (D.14), is equivalent to Eq.(4.27), where d σ is given in terms of ζ µ . We can compute the * product of δ V Φ B and Φ B using the technique of Eq.(4.3), where λ σ is given by Eq.(4.9).
There are three terms which contribute to D 1 . The main contribution comes from the first term (4.32). We show the details of the computation and the other terms in appendix E. Similarly, we obtain D 2 in Eq.(4.30). Noting the integration over the interval 2π which is caused by the projection ℘, we take the sum of Eqs.(4.29) and (4.30); the remaining finite terms [· · · ] are given in Eq.(E.8). From the first line we have obtained the on-shell condition (4.35) for the vector type fluctuation (4.27). The interpretation of the second line is more subtle. While the prefactor vanishes when (4.35) is imposed, ζ µ G µν k ν also has a divergent coefficient Σ_p sin²(pπβ)/(πp). In the regularization scheme we have used so far, we cannot make a definite statement on whether this coefficient as a whole vanishes or not. In this sense, it is not clear whether the idempotency relation implies the transversality condition ζ µ G µν k ν = 0. We note that, as we have already commented, the vector type deformation has the gauge symmetry (2.37), which is the correct feature of a gauge particle.
It may be of some interest to compare it with the analysis in VSFT [17]. While our variation δ V Φ B has a unique form of d σ (4.28), its counterpart d µ n (Eq.(4.29) in Ref. [17]) of VSFT was arbitrary. Actually they are all gauge degrees of freedom in VSFT except for one [26]. As for the ordinary gauge transformation, it was reproduced only after using regularization [27]. While we have discussed a close analogy of our analysis with VSFT, the gauge structure is very different.
The analysis of higher modes is more complicated because of the treatment of the interaction point. However, the leading term of the level-n perturbation δ n has a simple structure, which gives the correct mass-shell condition for such vertices: (1/2) k µ G µν k ν = 1 − n. On the other hand, the cancellation of the contributions from the interaction point will give a very nontrivial test of our scenario that the idempotency condition of the closed string field would give the correct spectrum and symmetry of the open string.
Discussion
We have seen that the "vacuum version" of closed string field theory embodies the basic goals of the VSFT proposal; namely, it has a family of exact solutions that correspond to various D-branes. In our case, all of the basic types of boundary states in the flat background (Dp-branes with flux) appear as exact solutions. Furthermore, the infinitesimal variations of the solutions produce the correct spectrum of the open string living on the D-brane (at least the lower-lying modes) with the correct gauge symmetry.
What is the "vacuum version" of closed string field theory? The resemblance of the action of HIKKO's string field theory to Witten's open string field theory is one of the encouraging points for suspecting the existence of such a theory. The computation of the tachyon vacuum is parallel to the open string case [2] and will be possible at least numerically. As in the VSFT proposal, one may conjecture that the equation of motion at this "vacuum" may be written as Eq.(2.26). We have observed the close analogies of the structure of the pure ghost kinetic term at the end of section 3. At this vacuum, there would be no propagating degrees of freedom in either the closed or the open string sector. There exist, however, nonperturbative solutions: the boundary states.
It is tempting to conjecture that, as in the VSFT proposal, the re-expansion of the theory around the solution produces open string field theory. By the assumption of the vacuum theory, there is no closed string propagation at the tree level. On the other hand, the open string becomes physical. The BRST charge at the new vacuum would be formally nilpotent by the Jacobi identity. The α 2 factor in front of the c 0 derivative is needed to make this part a derivation. Here we need to take the limit α → 0 or ∞, since the α parameter is preserved by the star product. A more detailed examination of this scenario will be presented in a future study.
The use of the closed string degrees of freedom has a definite advantage in describing physical processes involving the D-brane, for example, the time-dependent solutions of D-brane decay. In such situations, the rôle of the closed strings seems more important than that of the open strings [28]. If we use the open string fields alone, the treatment of closed strings becomes singular, while in our approach they are encoded as fundamental degrees of freedom. Of course, to proceed in this direction, we need to understand how the propagating degrees of freedom appear in the closed string sector, which would be the most important issue in our proposal.
We have described our scenario as a possible physical interpretation of the idempotency equation of the boundary states. We do not deny other possibilities at this point. Since Eq.(2.26) is a mathematically rigorous statement, it will play a fundamental rôle even if our scenario turns out not to be accurate.
Finally, we have obtained the boundary states which satisfy the idempotency relation, Eq.(2.26), rather than the equation of motion of the full string field theory. For example, in [29], it was argued that the asymptotic behavior of

Matter zero modes are represented by â 0 , â † 0 , and their eigenstates are given accordingly. Similarly, the α-dependent part is treated as an analogue of the matter zero modes. On the ghost zero mode, the bra-ket convention is fixed as well.
B Gaussian formulae
In string field theories using the oscillator representation, we often encounter computations of the form e aM a e a † N a † |0 . We show useful formulae for computations of this type, which are proved by inserting coherent states and performing Gaussian integrations.
For the matter sector with bosonic oscillators [a m , a † n ] = δ mn , a n |0 = 0, n ≥ 1 , (B.1) we have the following formula, where M, N are symmetric matrices.
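For a single bosonic mode, with the normalization ⟨0| e^{(m/2) a²} e^{(n/2) a†²} |0⟩, such a Gaussian formula reduces to (1 − mn)^{−1/2}, the one-dimensional case of a determinant formula det(1 − MN)^{−1/2}. This can be checked directly from the Fock expansion. A generic sketch, not tied to the paper's specific conventions:

```python
import math

def vacuum_overlap(m, n, kmax=200):
    """<0| exp((m/2) a^2) exp((n/2) a†^2) |0> for a single mode [a, a†] = 1.

    Expanding exp((n/2) a†^2)|0> = sum_k (n/2)^k sqrt((2k)!)/k! |2k> (and
    similarly for the bra) gives sum_k (m n / 4)^k C(2k, k), which is the
    binomial series of (1 - m n)^(-1/2).  Truncated at kmax terms.
    """
    return sum((m * n / 4.0) ** k * math.comb(2 * k, k) for k in range(kmax))

m, n = 0.3, 0.4  # need |m n| < 1 for convergence
lhs = vacuum_overlap(m, n)
rhs = (1.0 - m * n) ** -0.5
print(lhs, rhs)  # these agree up to floating-point error
```

The multimode case with symmetric matrices M, N factorizes over the eigenmodes of MN, which is how the determinant arises.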
For the ghost sector with fermionic oscillators, an analogous formula holds. In particular, if there are no terms dependent on the zero mode, the above formula is simplified as ∆ = 1 , E 0 = 0. In the computation of Eq.(3.13), we use it for the α = µ = ν = γ = 0 case.
C Relations among Neumann coefficients of light-cone type SFT
C.1 Definitions of Neumann coefficients
Neumann coefficients are used to define 3-string vertex |V (1, 2, 3) which represents connection conditions of string world sheets and encodes string interactions.
In Eq.(2.11), we used the light-cone type Neumann coefficients N rs mn , N r m , which are explicitly given in [21], [13], [14]. We also use the notation Ñ. It is convenient to rewrite them using matrix representations; for |α 1 | + |α 2 | = |α 3 |, their explicit forms are written in terms of the parameter β, and we note that −1 < β < 0 in this case.
C.4 Some formulae associated with (cos nσ), (sin nσ) and A (r) , D (r)
We list some more formulae which we use in computations in §4.
We give the corresponding formulae for the interval −π < σ < π and for the interval −π < π − σ < π. We used ∂ σ X above instead of ∂ τ X because we consider the "open string vertex" in terms of a closed string.
We note that there are no excitations along Dirichlet directions on |B(F ) in Eqs.(D.13),(D.14).
E Computation of vector type fluctuation
Here we present details of computations in §4.2.
Effective air-travel stress management is increasingly crucial in determining tourist satisfaction and travel choices, particularly in a time of intense fear about viruses, terrorism, and plane crashes. However, research about air-travel stress, particularly on what influential forces shape passenger stress levels and how, is still in its infancy. The current research proposes the adoption of Conservation of Resources (COR) theory as a holistic schema to identify, through resource dynamics, the potential influential forces for air-travel stress across leisure travel stages. The findings, based on surveying passengers at the gates of international and domestic airports in multiple countries, demonstrate the capability of the COR schema to predict and explain the influences on air-travel stress from an array of personal and situational/trip-specific factors. The theoretical advances from COR-based cross-stage stress analyses, and the guidance for customized airline/airport stress-soothing service strategies, are discussed.
Introduction
"Businesses are suspending operations and airlines are halting flights […] People across the world have grown anxious about being in crowds or travelling in confined spaces like airplanes" (Holson, 2020).
An era of unprecedented challenges (e.g., the coronavirus outbreak, airplane safety concerns, and terrorism) has caused the travel industry to rely on the growing meaningfulness of travel to survive. Meanwhile, the industry has to deal with the declining tolerance among would-be customers towards travel stress (Villa-Clarke, 2020). Many businesses recognize the importance of facilitating a less stressful travel experience to boost confidence in travel and regain faith in the industry once the current crisis subsides (Kinsman, 2020; Rabbu, 2020).
Travel stress is defined as "the perceptual, emotional, behavioral, and physical responses made by an individual to the various problems faced during one or more of the phases of travel" (DeFrank, Konopaske, & Ivancevich, 2000, p. 59). Despite the increasing recognition of its significance, the examination of travel stress in a leisure-travel setting has been largely insufficient and fragmented (Chen, 2017; Zehrer & Crotts, 2012). Exploring the underlying rationale of how forces shape leisure-travel stress remains particularly in its infancy (Fennell, 2017). Filling this gap is important for leisure-travel marketing and management theory and practice because assessing the potential influences on leisure-travel stress and developing corresponding strategies to better alleviate the stress can increase tourist loyalty and travel frequency.
This study proposes a schema premised on the Conservation of Resources (COR) theory as a framework to identify potential influences on leisure-travel stress. To the best of our knowledge, this paper is the first to employ the COR to assess travel stress as a dynamic construct fluctuating over different leisure-travel stages. It uses a holistic approach to assess stress at each stage by accounting for the influences from other stages. An adaptation to the COR (specifically in its resource categorization) is also proposed to better fit the examination of short-term stress fluctuation. The anticipated offering is a systematic tool for more accurate leisure-travel stress interpretation and prediction. Specifically, the study demonstrates the COR-premised analyses of leisure-travel stress using a segment of a leisure trip, the air-travel stages (both departure-flight and return-flight), as an example.
Recently, the air-travel industry has arguably encountered more challenges than many other tourism sectors. In addition to the health crisis, there are also rising labor costs, trade tensions, airspace restrictions, scrutiny of carriers' environmental footprints, as well as safety concerns due to aircraft accidents and terrorism (Harper, 2020;IATA Communications, 2019). These challenges squeeze profit margins resulting in the adoption of aggressive strategies to cut costs and enhance revenues (e.g., reducing leg space and charging for carry-on and checked luggage) (Whitley & Gross, 2019). While for some, air-travel has already been perceived as a stressful, unpleasant, but inevitable stage of leisure travel (McIntosh, Swanson, Power, Raeside, & Dempster, 1998), the aforementioned extra stressors further intensify the stress of the flight. For many travelers, this can compromise leisure-travel benefits for their well-being (Fritz & Sonnentag, 2006;Nawijn, Marchand, Veenhoven, & Vingerhoets, 2010). For the air-travel industry, it can ultimately result in less trust in the air-travel industry as well as reduced travel intentions and loyalty (Batouei, Iranmanesh, Nikbin, & Hyun, 2019;Dwyer, 2019;Lieberman, 2020). Therefore, the purpose of this paper is to unveil the mechanisms of air-travel stress to effectively alleviate it and regain customer confidence (Faraj-Dubz, 2020).
The travel literature has proposed a categorization of leisure-travel stress into three stages: pre-trip stress, travel stress (e.g., air-travel stress), and at-destination stress (Zehrer & Crotts, 2012). While pre-trip and at-destination stresses have been extensively studied (Gao & Kerstetter, 2018; Jonas & Mansfeld, 2017; Nawijn, De Bloom, & Geurts, 2013), the research on air-travel stress has nevertheless been scarce. The limited attempts have primarily examined the potential stressors contributing to air-travel stress (McIntosh, 2006; Beck, Rose, & Merkert, 2017; Batouei et al., 2019). Further, there is little understanding of why and how people with different personal characteristics and from diverse contexts react to various air-travel stressors. We turn to the COR theory for guidance.
The lens of the COR allows the uncovering of underlying mechanisms of air-travel stress and facilitates the identification of its shaping forces. According to COR, the level of stress people feel is associated with the experienced or anticipated insufficiency/depletion of resources and the resulting lack of resources invested to cope with stress (Kuentzel & Heberlein, 1992; Schneider & Hammitt, 1995). This study thereby assumes that factors influential to such evaluation of existing stress-coping resources (depending on focal and previous travel stages) and future resources (upcoming travel stages) are potential determinants of air-travel stress. Instead of aiming for the most comprehensive set of predictors of air-travel stress, this study focuses on examining specific personal and situational factors related to stress that are theoretically derived from the COR framework and for which data are readily accessible to the air-travel industry. The latter responds to the industry's strategic priorities of cost-effectiveness and high efficiency (IATA & SOIF, 2018).
This study empirically tests these identified factors for accuracy and identification consistency across contexts using passenger data collected at different travel stages from two international and two domestic airports located in Brazil and the United States. Such COR-based factor identification and empirical examination legitimizes the use of COR as a systematic and standardized travel-stress analysis approach and a valuable aid for strategic service management and marketing plans for travel-stress alleviation.
The specific research questions explored are as follows:
1. How accurately can the COR-premised schema predict the influences on air-travel stress (i.e., influential factors and patterns of influences) at the air-travel stages (both departure-flight and return-flight)?
2. Are the schema predictions consistent across contexts varied by air-travel stressor type?
3. What factors among the easily accessible personal and situational factors influence air-travel stress?
Travel stress
According to the model of stress appraisal and response (Schneider & Hammitt, 1995) the stress people feel is essentially a person-environment transaction process. First, both personal and situational factors determine the individual appraisal of encountered conditions as stressors. This is followed by further appraisals of coping possibilities and eventually coping reactions. Individuals may alleviate the stress by changing their objective situation, their appraisal, or the way they react to the appraisal such as adopting certain coping strategies (Schneider & Hammitt, 1995). When applied to the leisure-travel setting, the situational factors may comprise the features of a specific trip (e.g., destination features and trip length); the personal factors can be an individual's generic features that can influence his/her responses to stressors (e.g., sociodemographic and travel habits) (Nawijn, 2011). Understanding these factors may thus be crucial for determining how travel stress builds up, and accordingly plan for stress alleviation.
The existing literature has examined a variety of stressors associated with leisure trips and the extent of resulting stress states such as fear, anxiety, worry, and anger (Larsen, Brun, & Øgaard, 2009;Ma, Ooi, & Hardy, 2018;Mura, 2010). Specifically, the stressors that most commonly stimulate air-travel stress are: the possible or actual occurrence of adverse events (e.g., delayed/cancelled flights, missing a flight, health and safety concerns, and long waiting periods for taking off), the irritating behaviors of other passengers (e.g., bringing aboard too much luggage, loud talking, crying baby, and demanding special treatment), and the unreliable/uncomfortable services delivered by airlines/airports (e.g., low problem-solving efforts, unclear information, and unpredictable security measures) (Bricker, 2005;McIntosh, 2017).
The investigation into personal and situational factors influencing leisure-travel stress is still in its infancy and has mainly focused on relevant constructs such as risk and fear rather than stress. A wealth of literature has paid attention to contributors to trip risk perception, an empirically established predictor of travel stress (Lopez-Vazquez & Marvan, 2003). The identified contributing factors contain personal features such as sociodemographic background (Floyd & Pennington-Gray, 2004;Tsaur, Tzeng, & Wang, 1997), cultural background (Reisinger & Mavondo, 2007;Vassos, 1997), lifestyle variables (Fuchs & Reichel, 2006;Maser & Klaus, 2008), personality (Lepp & Gibson, 2008), travel experiences/habits (Lepp & Gibson, 2008;Sönmez & Graefe, 1998), as well as situational trip-specific factors such as features related to a specific destination (Dey & Sarma, 2010;Fuchs & Reichel, 2010). Although these personal and situational factors influence risk perception, which is a potential predictor of travel stress, previous research has not established a direct relationship between these factors and air-travel stress.
Alternatively, Fennell (2017) provided an overview of factors potentially contributing to fear, an affective reaction to some travel stressors that indicates one facet of leisure-travel stress. The proposed factors include personal factors of socio-demographics, health and mental/physical skills, time/financial resources, responsibilities, opportunities, and personality. Further, trip-specific factors of economic costs, social/cultural features, environmental features, travel services, and media information were noted. These factors are nevertheless identified primarily for the at-destination stage.
Yet, it is important to explore factors that influence stress at other travel stages (i.e., prior-trip, transportation, and after-trip stages) than at the destination. Stress levels of all travel stages matter as they jointly determine the overall trip stress level. Even a single-stage stress examination may not be accurate without considering the potential interdependence of stress levels between travel stages, as implied by the observed stress spillover between work life and leisure travel (Chen, Huang, Gao, & Petrick, 2018), and the extended stress relieving effect from a leisure trip to after-trip daily life (Chen, Petrick, & Shahvali, 2016). Given the between-stage connections, the previously addressed factors influential to at-destination stress indicator of fear may also impact stress at other travel stages, but possibly to different extents. For instance, de Bloom, Geurts, and Kompier (2013) demonstrated that travel activities influence mood, tension, and energy level (possible indicators of stress) for the after-trip period more than the during-trip period. It is thereby challenging to hypothesize the extent of effects from those at-destination influential factors on stress levels at other travel stages solely based on the at-destination evidence.
Finally, there is a need for a theoretical framework to explain the shaping forces of travel stress across leisure-travel stages. Kirillova, Lehto, and Cai (2017) proposed a rationale for only one facet of travel stress, anxiety toward losing a gained authentic identity after a trip. Chen et al. (2018) adapted the work-family border theory by Clark (2000) to the leisure travel setting. The theory proposes that poor management of the work-travel border is the cause of insufficient work stress alleviation from travel. Yet, the authors focused on work stress alleviation rather than travel-stress build-up. The theory of stress appraisal and response by Schneider and Hammitt (1995) conceptualizes the stress level as determined by a) the primary appraisal judging the stressfulness of a situation, b) a secondary appraisal of what can be done about it, and c) the adopted strategies to cope with the situation. However, by conceptualizing stress as only in the eye of the beholder, it essentially lays the burden of stress coping on travelers, who are expected to adopt appraisal strategies to minimize their felt stress (Hobfoll, Halbesleben, Neveu, & Westman, 2018; Westman, Hobfoll, Chen, Davidson, & Laski, 2004). Also, while it allows the examination of how travelers' personal factors (micro-level) may interact with the environment (i.e., the stressors) in shaping travel stress, it cannot identify the macro-level (i.e., global/environmental/socio-cultural) and meso-level (i.e., organization/community/group-wise) factors that are equally influential on travel stress (Korstanje, 2011). Moreover, while most existing stress-related explorations in leisure travel are post hoc in nature, examining perceived travel stress after a stressor occurs, the framework largely limits the industry's ability to forecast and prevent travel stress in reaction to stressors that have not yet occurred.
To overcome the shortcomings of the aforementioned theories, we introduce the Conservation of Resources theory as a promising overarching framework to detect the influential personal and situational factors to travel stress. The COR allows us to account for (1) diverse contexts and stress types, (2) different levels of stress-shaping factors, as well as (3) interdependence of travel stages. It also enables us to make predictions about possible influences before stressors occur.
So far, the application of the COR theory in tourism research has been scarce. The few existing attempts primarily explored the resource transactions between routine life and leisure travel in its entirety and without consideration of stress fluctuations or resource intricacies (see for example, Chen et al., 2016;Espino, Sundstrom, Frick, Jacobs, & Peters, 2002). Our study extends the previous findings by adapting the COR schema for a micro-level examination of stress at individual travel stages (using departure-and return-flight stages as examples). We also unveil the potential for a COR-based schema as a standardized and systematic framework to predict the underlying mechanisms for potential influences on travel stress of different types and stages.
Conservation of Resources theory
Conservation of Resources theory developed by Hobfoll (1989) suggests that the conservation of existing and acquisition of new resources is a major motivation for individual decision-making and actions. Depletion or insufficiency of resources, on the contrary, can lead to stress, emotional exhaustion, and destruction of wellbeing. Resources have been loosely defined in the existing literature as "anything perceived by the individual to help attain his or her goals" (Halbesleben, Neveu, Paustian-Underdahl, & Westman, 2014, p. 1338). The theory establishes two major principles underlying people's behaviors: 1) primacy of resource loss: losses of resources are more harmful than similarly valued gains, hence people may try harder to avoid resource losses than to receive gains; and 2) resource investment: people are willing to invest resources to prevent resource loss, recover from losses, and acquire resources (Hobfoll, 2001). There are also two crucial COR corollaries: resource gain spirals (people with more resources, or who have been gaining resources, are more likely to experience further resource gains) and resource loss cycles (those with insufficient resources, or who have been experiencing resource losses, are more likely to experience further resource losses as they become more defensive in how they invest resources).
Resources were originally categorized into four types: personal (i.e., demographics and personalities), condition (i.e., life statuses/roles), object (i.e., tangibles such as housing/transportation adequacy), and energy (i.e., time/effort). Later this typology was adapted into different versions, such as the isolation of social resources from condition category and the split between physical and psychological resources (Hobfoll, 2001;van Woerkom, Bakker, & Nishii, 2016). It remains an ongoing conversation on how to properly define resources and how the all-inclusive yet abstract categories can be used in practice (Hobfoll et al., 2018). For instance, it is challenging to assess the effect of each resource type on individual stress levels given the high heterogeneity within each type (e.g., energy incorporates mood, physical energy, cognitive energy, time, and so forth). It is also less meaningful to conduct cross-context comparisons for resource status/stress-shaping dynamics based on these broad categories.
Additionally, there is a demand for more COR research exploring the impact that resources play in shorter-term settings, such as across days/weeks (Hobfoll et al., 2018). This study hence proposes an adapted resource typology with a focus only on the resources changeable in the short term (i.e., over a leisure trip), as opposed to the unchangeable ones such as objects (e.g., possessions like housing) and conditions (e.g., marriage). The typology with empirically established changeable resources (Kammeyer-Mueller, Simon, & Judge, 2016; Lee & Ok, 2014) is as follows: physical resources (e.g., physical energy, health, budget, and time), affective resources (e.g., positive mood), cognitive resources (e.g., attention and memory), social resources (e.g., social status and support), and dispositional resources that determine the allocation of other resources (self-oriented, e.g., autonomy, self-efficacy/control, resilience, self-esteem, and optimism; and social-oriented, e.g., trust, empathy, and patience).
The proposed focused categorization aims to be more applied as it directs the attention and effort towards the resources that travelers can alter to alleviate stress over the time of a trip. Each category is more homogeneous than categories in the original typology, which enables meaningful comparisons of resource status in each category across contexts. For example, the overall variation of cognitive resources across travel stages is much more meaningful than identifying the overall change in the original category of energy resources (which involves not only the cognitive ability and effort but also other heterogeneous components such as time and money). In addition, the proposed typology separates the cognitive and affective resources from the original energy category to account for the dominance of these resource types in stress buildup. The separation is based on the consideration that a) stress levels are determined by appraisal and reaction through the two completely different channels -cognition and affection, b) many other types of resources may shape stress via these two channels (e.g., knowledge resources reduce stress through less consumption of cognitive load, perceived high social status feeds the positive affection, etc.) (Jen-Hwa Hu, Han-fen, & Xiao, 2017;Tangney, Miller, Flicker, & Barlow, 1996), and c) cognitive and affective resources largely regulate many other resources' values, such as enhanced sensitivity to budget insufficiency under the cognitive overload (Deck & Jahedi, 2015).
Another important observation is that the potential to gain/consume a resource type varies by trip stage, which leads to cross-stage fluctuations in resource storage and stress levels. Some resources are in high demand at one stage but less so at another. For example, consumption of physical/cognitive resources is relatively high at the trip preparation stage but can be low, or these resources can even be replenished, over a relaxed resort stay. Similarly, if the at-destination stage is filled with adventurous physical activities, travelers are expected to gain more self-efficacy resources but expend more physical energy resources. Hobfoll, Stevens, and Zalta (2015, p. 176) further addressed the variation of context/setting in providing "safety and protection against resource loss". As the air-travel stage is commonly perceived as more stressful and less enjoyable than other trip stages, understanding its stress management is important. Effective stress management can potentially bring great improvements to travelers' resource protection/renewal and accordingly reduce their reluctance to travel.
In conclusion, following the COR rationale, this study suggests that resource adequacy (physical/cognitive/affective/social/dispositional) and the resulting decision to invest/conserve resources at different air-travel stages will impact how travelers cope with stressors and the ultimately felt extent of air-travel stress. As opposed to previous attempts, this approach allows for more accurate identification of influences. It not only captures the direct influences of a factor on resource dynamics at the focal stage, but also accounts for spillover effects from resource variations at other stages due to that factor. A visual demonstration of the conceptualization is depicted in Fig. 1.
It should be noted that throughout the various air-travel stages many types of resources may be beneficial in reducing air-travel stress. Examples are physical resources to handle the lengthy flight process, cognitive resources for information verification, affective resources for buffering the negativity from queueing or service problems, social resources for informational/emotional support, and dispositional resources to designate the above resources to be invested in handling various stressors. As the literature has not indicated the specific types of resources required for coping with each air-travel stressor type (e.g., incidents, fellow passengers, or service deliveries), it is presumed that each aforementioned resource type matters to air-travel stress levels.
Trip duration.
With an increased trip length, travelers' negative emotions are likely to increase (Nawijn et al., 2010), reflecting declining affective resources under the proposed resource typology, such as before-travel guilt about the postponed duties and upon-return worries given the anticipated duty overload (Mitas, Yarnal, Adams, & Ram, 2012; Nawijn & Damen, 2014). It is also common for these travelers to use more physical and cognitive resources for pre-travel planning and preparation given the increased trip length (Nawijn et al., 2013). Furthermore, if the trip length is not sufficiently long to anticipate or experience an adequate gain of physical/cognitive/affective resources to compensate for the experienced or anticipated losses of these resources, the increased trip length will likely increase air-travel stress at the departure/return-flight stage (de Bloom et al., 2010). Based on the COR corollary of resource loss cycles, travelers would conserve rather than invest resources in dealing with air-travel stressors.
The conservation is in response to the greater before-trip resource exhaustion with increasing trip length and the insufficient anticipated/actual resource restoration over the still-short trip. Subsequently, the resource loss cycle continues, as the conservation of existing resources at air-travel stages (e.g., avoiding communication with other passengers) can potentially result in further resource losses, such as anger felt towards other passengers (affective resource loss) and self-loathing due to misunderstandings (loss of the dispositional resource of self-esteem).
Moreover, travelers are more likely to gain existential authenticity-the awareness of and behavioral alignment with the true self (Schlegel, Hicks, King, & Arndt, 2011)-from much longer trips (Brown, 2013; Kirillova et al., 2017), which fosters the high-level self-oriented dispositional resource of self-esteem (Goldman & Kernis, 2002; Heppner et al., 2008). The self-esteem fostered over the trip then motivates the investment of existing resources (e.g., physical/cognitive/affective) at least at the return-flight stage to sustain self-esteem gains, given the COR corollary of resource gain spirals and the greater priority people assign to dispositional resources in resource investment/conservation decisions (Penney, Hunter, & Perry, 2011). For instance, people would invest affective resources in other passengers by showing compassion, in order to harvest more resources in return (e.g., social support, positive mood, and in particular self-esteem).
To summarize, only when a leisure trip is "long enough" can increased trip length be associated with increasing gains and diminishing losses of resources. Accordingly, the likelihood of resource investment at air-travel stages increases, resulting in lower air-travel stress. For trips that are not long enough, an increase in trip length leads to greater resource loss that cannot be adequately recovered, resulting in an increase in air-travel stress. Yet the threshold for a leisure trip to count as "long enough to restore sufficient resources" remains to be explored. A quadratic relationship is therefore hypothesized between trip duration and air-travel stress, such that within a certain number of travel days air-travel stress increases with trip length, while beyond that duration stress declines with additional days (H1a-b).
H1. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages are curvilinearly related to trip duration, such that air-travel stress initially increases as trip duration increases; after surpassing a threshold, the stress weakens with further increases in trip duration.
Cultural and geographical distance.
Travel to a culturally more distant destination is likely to consume more physical and cognitive resources at the before-trip stage due to increased uncertainties (e.g., packing more supplies, conducting more research). It is also associated with the anticipated and actual consumption of more of these resources to explore the destination (e.g., more physical energy spent finding locations, more cognitive processing of novel information). These actual/anticipated resource losses can trigger the resource conservation tendency at the departure-flight stage and accordingly a higher stress level.
On the other hand, experiencing a culturally distant destination may enhance resource replenishment. This may be particularly the case for dispositional resources, as the novel culture allows for better detachment from daily routines and hassles (de Bloom et al., 2010). The detachment offers a greater opportunity to improve self-esteem through gains in existential authenticity (Kirillova et al., 2017), self-efficacy, and cultural intelligence (Frías-Jamilena, Sabiote-Ortiz, Martín-Santana, & Beerli-Palacio, 2018; Hirschorn & Hefferon, 2013). Following the COR principle of resource investment, the resulting sense of dispositional resource sufficiency activates resource investment at the return-flight stage to cultivate further gains of these dispositional resources. This aids coping while reducing the air-travel stress level.
While cultural distance may thus have both a depleting and an enhancing effect depending on the air-travel stage, the geographical distance spanned by the travel should have a one-sided depleting effect. As travelers are likely to consume more physical and affective resources in long-haul than short-haul flights due to fatigue (Flower, Irvine, & Folkard, 2003), an increase in geographical distance should increase air-travel stress.
H2a. The air-travel stress level increases with cultural distance between tourist origin and destination at the departure-flight stage.
H2b. The air-travel stress level decreases with cultural distance between tourist origin and destination at the return-flight stage.
H3. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages are higher among travelers flying greater distances.
Previous destination experiences.
With more familiarity with a destination, travelers should consume fewer cognitive resources before and during the trip (Heyman, Van Rensbergen, Storms, Hutchison, & De Deyne, 2015). Guided by the corollary of resource gain spirals, the resulting greater resource sufficiency then motivates the tourist to invest more cognitive resources in air-travel problem-solving, which contributes to lower air-travel stress levels and potentially more resource gains (e.g., positive mood).
H4. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages are lower among travelers with more prior visits to a destination.
Number of travel companions.
Given the COR principle of resource loss dominance, although travelers would gain social resources (i.e., social support) by travelling with a bigger group, they may find the associated variety of resource losses more salient and not adequately compensated by the social resource gains. Accommodating the more diverse or even conflicting needs of a larger travel party in trip planning and onsite decision-making can deplete cognitive and affective resources (Dellaert et al., 1998). Even the higher-in-hierarchy dispositional resources could be exhausted (e.g., empathy) or compromised (e.g., autonomy) (Petrides, Pita, & Kokkinaki, 2007). Travelers travelling with a larger party are thus more prone to conserve their resources during air-travel stages and accordingly become more stressed in reaction to any stressors.
H5. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages increase with travel party size.
2.3.1.5. Airport differences. Airport differences, particularly country differences (e.g., in a developing country versus a developed country) and status differences (i.e., international versus domestic airport), can cause variations in the extent of stressors (e.g., crowdedness and flight delay). Consequently, the extent of resource consumption to cope with stressors will likely differ across airports and result in a variation of stress levels.
H6. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages differ by country of airport (i.e., Brazil versus USA airports).
H7. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages differ by airport status (i.e., international versus domestic).
COR examination of personal factors
2.3.2.1. Travel frequency. Experienced travelers are unlikely to consume many cognitive resources pondering the uncertainties of upcoming trips. They could even benefit from a sense of self-efficacy gain (a self-oriented dispositional resource) from rich travel experiences (Scarinci & Pearce, 2012; Valencia & Crouch, 2008). However, repeated consumption over frequent trips may exhaust physical resources and social-oriented dispositional resources (e.g., empathy and patience). This results in little additional acquisition, or even a reduction, of affective resources (e.g., lacking excitement and joy as the sense of novelty declines) (Eden, 1990). With more types of resources exhausted than gained, and following the COR principle of resource loss dominance, more frequent travelers should conserve resources at air-travel stages and consequently experience more air-travel stress.
H8. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages increase with travel frequency.
Employment and job strain.
Employed travelers may experience a spillover of job strain during a leisure trip because of their likelihood to think about or conduct job duties while on the trip (e.g., checking work emails on a mobile phone) (Chen et al., 2018), which occurs more frequently among those with high job strain. This results in less actual/anticipated resource restoration (e.g., cognitive/affective/dispositional) over the trip. Travelers with jobs, and particularly those with high job strain, also feel the need to leave their jobs in good order before travelling, which can consume significant cognitive and affective resources (e.g., feeling guilty or anxious) (DeFrank et al., 2000). The before-trip resource depletion plus insufficient during-trip resource restoration can lead to greater departure-flight stress via the resource-conservative motivation (principle of resource loss dominance). Upon return, the anticipated increase in resource consumption due to work overload (Mitas et al., 2012; Nawijn & Damen, 2014), in addition to the insufficiently restored resources during the trip, could further accelerate resource conservation and cause higher-level stress at the return-flight stage.
H9. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages are greater for employed travelers than for travelers currently without jobs.
H10. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages are greater for travelers with high job strain than for those with lower job strain.
Age.
Older travelers are more likely to have fewer physical or cognitive resources at their disposal (Atkinson et al., 2005). Thus, they may direct less of these resources towards handling air-travel stressors, for example through an expected lack of cognitive capability for problem-solving and for regulating negative emotions in the face of adversity. They may therefore be more likely to suffer higher air-travel stress levels due to inability/reluctance to invest resources at air-travel stages. For instance, they may have a harder time memorizing boarding information or controlling anger, which escalates negative emotions.
It is also noteworthy that older travelers are more prone to gaining existential authenticity from leisure travel (Kirillova et al., 2017), hence the greater likelihood of gaining the self-oriented dispositional resource of self-esteem, which can motivate the investment of more physical, cognitive, affective, and social resources for further self-esteem gains and form resource gain spirals. However, as Gnoth and Matteucci (2014) posit, an individual's sense of existential authenticity can only be provoked when exposed to the more ideal version of self, or at least when "lifting one's head from the drudgery of every-day life" (p. 11). Such a potential gain of self-esteem may not be anticipated and hence should not influence resource employment decisions much at the departure-flight stage. Moreover, the gained existential authenticity is more likely to trigger older travelers' after-trip existential anxiety due to their greater "sensitivity to the incongruence between the newly acquired existential authenticity and everydayness" (Kirillova et al., 2017b, p. 22). Their fear of losing the replenished self-esteem upon return would incline them to conserve resources rather than invest them in stressor coping at the return-flight stage, in line with the COR principle of resource loss dominance. Therefore, we expect higher stress levels for older travelers regardless of air-travel stage or resource type examined.
H11. The air-travel stress levels at the departure-flight (a) and return-flight (b) stages increase with age.
Gender.
Females expend greater cognitive effort on travel information searching and planning than males, given females' more exhaustive and elaborative information searching/interpreting patterns (Kim, Lehto, & Morrison, 2007). Also, females are more socially oriented and thus tend to invest greater affective resources than males in providing social support to others (Tsiotsou, Ratten, & Sigala, 2010). This can exacerbate their consumption of other resources, such as cognitive resources, considering that a majority of before-trip planning and during-trip organizing tasks are undertaken by females (Fischlmayr & Kollinger-Santer, 2014). Such before-trip depletion and anticipated during-trip consumption of cognitive and affective resources can lead to a greater likelihood of resource conservation for females than males, thus increasing their stress levels more than males' at the departure-flight stage.
During the trip, thanks to a more interdependent construal of self (as opposed to the more independent self-construal of males), females can experience a greater improvement in existential authenticity and hence obtain a higher boost to self-esteem (Kirillova et al., 2017). That study also found that females' gained self-esteem is unlikely to fade completely after the return to everydayness, as their after-trip existential anxiety attributed to the loss of gained authenticity was not greater than males'. Given the disproportional importance of the dispositional resource (i.e., self-esteem) replenished at the destination, and despite females consuming a greater extent of cognitive/affective resources than males at the destination, females should still have a better chance of experiencing lower return-flight stress levels than males. This is likely due to multi-resource investment in air-stressor coping motivated by sustaining self-esteem growth, following the COR corollary of resource gain spirals.
H12a. The air-travel stress level at the departure-flight stage is greater for females than males.
H12b. The air-travel stress level at the return-flight stage is lower for females than males.
Samples
Data were collected at airports in the USA and Brazil, with one international and one domestic airport selected for each country. Only participants on international flights were recruited from the gate hold area in International Terminal E at Atlanta's Hartsfield-Jackson International Airport (ATL) and the International Terminal of Guarulhos International Airport in São Paulo, Brazil (GRU). Both airports were chosen for their well-known high passenger volume, ATL being the busiest in North America and GRU the busiest in South America (Zhang, 2016). Participants for the domestic flights were recruited in the United States from Columbia Metropolitan Airport (CAE) and in Brazil from Belo Horizonte (CNF). At different times during each day over two weeks, participants were approached while sitting at the departure gate and asked to participate in a 15-min survey focusing on travel. Travelers appearing under the age of 21 were not approached. Collecting the survey at the gate not only ensured a high response rate, because passengers were not in a rush, but also provided a unique setting, as travelers were experiencing the travel stress rather than having to recall or anticipate it. Travelers who indicated the purpose of their trip as primarily business were excluded from the dataset in our study. For airports in Brazil, participants had the option of completing the survey in English or Portuguese; the Portuguese version was back-translated by licensed translators.
The usable sample size is 1092 in total, with 28% (305) collected from ATL, 46% (497) from GRU, 5% (55) from CAE, and 22% (235) from CNF; hence around 73% are international travelers and 27% are domestic travelers. Moreover, 59% of the respondents were departing for the vacation destination, while 41% were returning home. The 2% of respondents who travelled for longer than 90 days were removed from further data analyses as outliers, following a common practice in tourism studies (Alegre & Pou, 2006; Kang, Lee, Kim, & Park, 2018). Additional respondent information (e.g., demographics and travel features) can be found in Appendix I.
Measures
To examine the reliability of COR in explaining the air-travel stress mechanisms regardless of stressor type, air-travel stress was measured using a six-point Likert scale (0 = completely disagree and 5 = completely agree) developed by Bricker (2005), with three dimensions representing stress reactions to three types of stressors. The three dimensions are anxiety about irregular adverse events (8 items, e.g., "My body feels tense if my flight is delayed"), anger with other passengers (6 items, e.g., "I would feel resentful if I had to sit near loud/talkative passengers"), and mistrust in regular airline/airport service deliveries (8 items, e.g., "I sometimes think airline/airport personnel are unfriendly or unhelpful"). The reliability of each dimension is satisfactory, with Cronbach's alpha values of 0.74 for stress toward irregular adverse events, 0.74 for stress toward fellow passengers, and 0.85 for stress toward regular airline/airport service deliveries, all exceeding the commonly accepted criterion of 0.7 (Nunnally & Bernstein, 1994). The average of the item scores in each dimension was then taken as the stress measure for that dimension.
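The reliability check above can be made concrete with a minimal sketch. This is not the authors' analysis script; the item-score matrix below is randomly generated placeholder data, shown only to illustrate how Cronbach's alpha is computed from an (respondents × items) matrix.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-5 Likert responses for a 6-item dimension (placeholder data):
# a shared latent trait plus item-level noise yields correlated items.
rng = np.random.default_rng(42)
latent = rng.normal(2.5, 1.0, size=(200, 1))
scores = np.clip(latent + rng.normal(0, 0.8, size=(200, 6)), 0, 5)
alpha = cronbach_alpha(scores)  # values >= 0.7 are conventionally acceptable
```

A dimension whose items all track the same latent construct, as here, yields an alpha well above the 0.7 criterion cited in the text.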
For the measures of situational/trip-specific factors, geographical distance is proxied by the flying distance between the airport and the travel destination, obtained from an online distance calculator (https://www.distancefromto.net). If a country rather than a city was provided as the travel destination, the flying distance between the surveyed airport and the most populous region of that country was calculated. Cultural distance between the tourist origin and the destination was calculated with the most widely adopted approach, by Kogut and Singh (1988), which yields a simple standardized quantitative measure of cultural distance between regions based on key cultural dimensions. In this study, the eight cultural dimensions established by Torbiörn (1982) to conceptualize cultural novelty, such as "everyday customs that must be followed" and "available quality and types of foods", were adopted for calculating the Kogut and Singh index. The index values for the eight dimensions were then averaged to obtain the cultural distance measure. Existing empirical studies have also documented the Kogut and Singh index calculated between major global regions, which renders this index readily accessible for airlines/airports to integrate into their databases. The remaining situational/trip-specific factors are measured with single questions. Trip duration is measured by asking how many days a tourist has spent or will spend on this trip; to measure previous destination experiences, participants were asked how many times they had been to the destination before; they were also asked about the number of companions travelling with them on this trip; finally, they were asked whether they were on the way home or just about to depart for the trip, to identify the current air-travel stage.
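A Kogut and Singh (1988)-style index averages, over cultural dimensions, the squared score difference between two regions scaled by that dimension's cross-country variance. The sketch below illustrates this averaging step only; the dimension scores and variances are hypothetical placeholders, not the values the authors used.

```python
import numpy as np

def kogut_singh_index(origin, destination, dim_variances):
    """Mean over dimensions of (score difference)^2 / dimension variance."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(destination, dtype=float)
    v = np.asarray(dim_variances, dtype=float)
    return float(np.mean((o - d) ** 2 / v))

# Hypothetical scores on 8 cultural-novelty dimensions (Torbiorn, 1982)
# for an origin/destination pair, plus hypothetical cross-country variances.
usa       = [3.0, 2.0, 4.0, 1.5, 2.5, 3.5, 2.0, 4.5]
brazil    = [4.0, 3.5, 2.5, 2.0, 3.0, 2.0, 3.5, 3.0]
variances = [1.2, 0.8, 1.5, 0.5, 1.0, 1.1, 0.9, 1.3]
cd = kogut_singh_index(usa, brazil, variances)
```

Scaling each squared difference by the dimension's variance keeps dimensions with naturally wide score spreads from dominating the average.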
The personal factors are generally measured with single questions, covering gender, age, employment status (a dummy variable; only currently employed travelers were asked to fill out the job strain scale, while others left it blank), and travel frequency (indicated by the number of trips taken in the past year). The only exception is job strain, which was measured using a six-point Likert scale developed by Warr (1990) (0 = never, 5 = all of the time). It assesses two dimensions: the extent to which travelers felt depressed (gloomy, miserable) and anxious (tense, worried) while working in their job. The average of the six item scores was treated as the job strain measure (Cronbach's alpha = .89).
Data analyses
Hierarchical polynomial regression was adopted to test the hypotheses and empirically examine the proposed influences on air-travel stress. Based on the rule of thumb of 10 participants per independent variable (Hair, Black, Babin, & Anderson, 2014), the minimum sample size for the analyses is met by even the smallest sub-group for the regression analysis (currently employed sub-sample at the return-flight stage (N = 340)). In Step 1, the personal factors were entered into the regression model predicting air-travel stress.
In Step 2, the situational/trip-specific factors were entered, including the linear term of trip duration.
In Step 3, the quadratic term of trip duration was added to the model to represent the hypothesized curvilinear effect. A statistically significant and negative quadratic term, indicating an inverted U-shaped relationship, would support H1a-b; statistical significance for the other personal and situational factors would likewise support the corresponding hypotheses. The inflection point, which indicates the trip duration producing the highest air-travel stress, is calculated following Weisberg (2005). To avoid multicollinearity, the standardized values of independent variables were used in each regression model (Aiken, West, & Reno, 1991). The normality, homoscedasticity, and multicollinearity assumptions were checked. Only the skewed IV measures (travel frequency, destination experience, and companion number) had to be log-transformed (see original distributional statistics in Appendix I).
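The Step-3 quadratic specification and the inflection-point calculation can be sketched as follows. This is a simplified stand-in for the full hierarchical model (no covariates, synthetic data), shown only to make the inflection point -b1 / (2*b2) of a fitted curve stress = b0 + b1*d + b2*d^2 concrete.

```python
import numpy as np

def quadratic_fit(duration, stress):
    """OLS fit of stress = b0 + b1*d + b2*d^2; returns the coefficients
    and the inflection point -b1 / (2*b2) of the fitted parabola."""
    d = np.asarray(duration, dtype=float)
    X = np.column_stack([np.ones_like(d), d, d ** 2])
    b, *_ = np.linalg.lstsq(X, np.asarray(stress, dtype=float), rcond=None)
    return b, -b[1] / (2.0 * b[2])

# Synthetic inverted-U data with a known peak at 12 days (placeholder values)
days = np.linspace(1, 60, 200)
stress = 2.0 + 0.24 * days - 0.01 * days ** 2
coefs, peak = quadratic_fit(days, stress)  # peak recovered at ~12 days
```

A negative fitted b2 confirms the inverted U-shape, and the recovered peak corresponds to the trip duration with the highest predicted stress, mirroring how the 11.75- and 20.5-day inflection points are read off in the Results.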
Some participants were on their way to the destination while others were on the way back home; the two subsamples thus represent the departure-flight and return-flight stages, respectively. Given the hypothesized between-stage differences in the stress influence mechanisms, analyses were conducted separately for each stage (Research Q1).
The hypothesized influences drawn from the COR schema were developed for the generic concept of air-travel stress, given the absence of literature on resource dynamics shaping stress toward any specific air-travel stressors. Research Q2 further examines whether these assumed influences remain consistent regardless of the stressor type triggering the stress. The influential factors (Research Q3) and the mechanisms across contexts are accordingly identified. The explored contexts are three types of stress corresponding to three stressor types (adverse unusual events, unpleasant behaviors of other passengers, failed regular airline/airport services). Each stress type is regressed on all theoretically identified personal and situational factors, with findings compared between stress types for consistency. Moreover, to capture the potential influence of job strain without compromising the statistical power for estimating other factors, the models were run twice: among all travelers, and among employed travelers only, with the latter including the job strain measure. Appendix II presents the correlations between the variables.
Results
The results of the hierarchical polynomial regressions are listed in Tables 1-3, corresponding to the three air-travel stress dimensions. Additionally, we conducted a bootstrapping procedure (2000 samples) and examined the 95% confidence interval for each effect (Online Appendix A-C) to enhance the rigor of influence identification beyond significance levels alone (Gelman & Stern, 2006). Only effects that are significant and whose confidence intervals do not contain zero are interpreted. The quadratic effect of trip duration in Step 3 for predicting the air-travel stress triggered by irregular adverse events (SAE; e.g., flight delay/cancellation, baggage loss/damage, missing or being late for a flight) at the return-flight stage was statistically significant for the full sample (β = − 0.0003, p < .01) and the employed sample alike (β = − 0.0002, p < .05), supporting H1b only. The negative sign of the quadratic effects supports the inverted U-shaped relationship between trip duration and return-flight stress. The inflection point is calculated as 11.75, indicating that as long as the trip duration is shorter than 12 days, the longer the trip, the more SAE a tourist feels upon return; yet once the trip exceeds 12 days in length, the longer they travelled, the significantly less SAE presents (Fig. 2). The inverted U-shaped relationship between trip duration and stress toward other passengers (SOP) (Table 2) nevertheless appears at the departure-flight stage (all: β = − 0.0004, p < .001),1 supporting H1a. The calculated inflection point of 20.5 further suggests that when a leisure trip lasts 20 days or less, a longer anticipated trip leads to more intense SOP; however, when the trip stretches beyond 21 days, the longer length is associated with a lower SOP (Fig. 2).
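The 2000-sample bootstrap check described above can be sketched generically as a percentile bootstrap over resampled cases. This is not the authors' exact procedure; the data and the slope statistic below are hypothetical, illustrating the rule that an effect is retained only when its confidence interval excludes zero.

```python
import numpy as np

def bootstrap_ci(x, y, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for stat(x, y),
    resampling (x, y) cases with replacement n_boot times."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample case indices
        reps[i] = stat(x[idx], y[idx])
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# Hypothetical predictor/outcome with a true slope of 0.5
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 0.5 * x + rng.normal(scale=0.2, size=300)
slope = lambda a, b: np.polyfit(a, b, 1)[0]
lo, hi = bootstrap_ci(x, y, slope)  # interval excluding zero -> effect retained
```

Combining the p-value with a CI that excludes zero, as the text does, guards against over-reading borderline significance (Gelman & Stern, 2006).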
Interestingly, stress about regular airline/airport service delivery (SAS) (Table 3) only shows a linear relationship with trip duration, at the return-flight stage; namely, the longer the experienced trip, the less SAS travelers felt (all: β = − .01, p < .05) 1 . Hence, H1a-b can only be conditionally accepted, depending on stressor type.
Prior destination experience only has a statistically significant effect on SOP (Table 2), and only at the departure-flight stage. The more times travelers have been to the same destination, the angrier they feel about fellow passengers (all: β = 0.13, p < .05). The valence of the identified effect is also opposite to the hypothesis; H4a-b are rejected.
An increased number of travel companions only exacerbates SOP (all: β = 0.14, p < .05) at the return-flight stage and does not influence any stress type at the departure stage (partially supporting H5b and rejecting H5a) (Table 2). Airport differences, in terms of airport status and country of airport, do appear to influence air-travel stress levels. While airport status is not influential, country of airport is an important predictor of all stress types. Specifically, travelers at Brazilian airports, as compared to those at American airports, showed higher-level SAE (at the return-flight stage) (all: β = 0.22, p < .05) (Table 1), higher-level SOP (at the departure- and return-flight stages) (departure-employed: β = .32, p < .01; return-all: β = 0.34, p < .01; return-employed: β = 0.25, p < .05) (Table 2), and higher-level SAS (departure-flight stage) (all: β = 0.44, p < .05; employed: β = 0.28, p < .001) (Table 3). H6a-b are thus conditionally supported depending on stressor type, while H7a-b are rejected.
Fig. 2. Quadratic Relationship between Trip Duration and Air-travel Stress Dimensions (for all samples).
1 The effect is found to be insignificant among the employed tourist samples. The potential reasons could include, but are not limited to: a) the introduced covariate of job strain serves as a moderator, b) job strain casts a stronger main effect on the DV than the examined IV, or c) the smaller sample size for the employed-only analyses. Additional studies are needed to confirm the cause. Hence no firm acceptance/rejection of the corresponding hypothesis among employed-only samples can be drawn.
2 Here the statistical significance is only found among the employed population, possibly because of a potential negative confounding effect of job strain. People taking more distant leisure trips are also likely those with less job strain and thus less spillover stress from work to vacation. Therefore, the effect of flight distance could appear after controlling for the ameliorating job-strain effect.
Conclusion and discussion
This study proposes a schema premised on Hobfoll's (1989) Conservation of Resources theory as a systematic and standardized approach to identify potential influential factors of travel stress by analyzing their impacts on resource dynamics over the entirety of a leisure trip. Within the travel domain, the study focuses on the air-travel context for demonstrative purposes. Specifically, we identified the potential influential factors of air-travel stress over different air-travel stages from a series of personal and situational factors readily available to airlines/airports (see the summary of hypothesis results in Table 4). First, the findings demonstrate the COR-identified influence variations by travel stage (Research Q1) and reveal novel patterns of cross-stage variation. Second, cross-stressor variations are identified (Research Q2). Third, all the COR-identified factors except cultural distance, airport status, and gender are established as influential to air-travel stress. These findings emphasize the importance of context-based COR prediction and interpretation of influences on travel stress. The interpretation of the discrepancies from the hypotheses further highlights the potential unique resource(s) for handling each stressor type and sheds light on cross-stressor differences in influence patterns.
[Table note: *p < .05, **p < .01, ***p < .001; Employment: Employed = 1, Currently not employed = 0; Gender: Female = 0, Male = 1; Airport country difference: USA = 0, Brazil = 1; Airport status difference: Domestic = 0, International = 1; Departure/Return Stage: Departure = 0, Return = 1.]

3. The statistical significance is established only among the employed population, possibly because of a potential negative confounding effect of job strain. The older age groups may have less job strain and thus less spillover stress from work to vacation, which counteracts the potential stress-intensifying effect of aging due to resource deficiency. Therefore, the effect of age could appear after controlling for the ameliorating job-strain effect.
The proposed stressor-specific resource dynamics useful in explaining the revealed discrepancies from the hypotheses can be found in Table 4, corresponding to each hypothesis and stressor type.
Theoretical implications
This study advances the travel stress literature, Conservation of Resources theory, and travel stress methodology. To begin with, this study set out to establish a framework that holistically accounts for the complexity of the micro-, meso-, and macro-level influences on travel stress. We introduced the COR framework, adjusted it to the travel stress context, and further advanced it. By doing so, our study eliminates the theoretical black box between the various levels of influential factors and travel stress by converting them into dynamics of resource consumption, conservation, investment, and gain. This allows for a systematic assessment of personal and situational factors that can potentially affect travel stress at different trip stages and given various stressors.
Further, the current research for the first time proposes the necessity of multi-stage joint travel-stress analyses for improved accuracy. Namely, stress at a certain travel stage (e.g., upon-departure) should be assessed based on analyzing the potential influences on not only the focal stage but also the other travel stages, given the interconnected resource dynamics between stages. Besides, the transportation stages connect leisure travel and daily life and are critical in determining the overall stress level and well-being benefits of a leisure trip (Nawijn et al., 2010). The study thus contributes by illuminating their underexplored stress mechanisms.
We also extend the COR in three ways by refining its structure, further disclosing its hidden mechanisms, and revealing its unexploited potential in stress management. First, we push the COR boundaries and refine its structure by proposing an adapted resource typology from Hobfoll's (1989) predominant categorization of resources (personal, condition, object, and energy). Specifically, we focus on the resources changeable in the short term (physical, cognitive, affective, social, and dispositional) to understand the stress-shaping resource dynamics over a relatively short-lasting activity such as leisure travel. We suggest that this focused examination is better suited for evaluating temporary stress rather than chronic stress, where the original typology is mostly applied. By narrowing the scope and homogenizing each category, our study strengthens the practical value of resource conceptualization. The relatively homogeneous typology also allows for more meaningful between-context comparisons of resource dynamics. These findings could further be extended by exploring between-stage or between-setting differences in consumption patterns.
Second, this study further discloses the COR's hidden mechanisms, including temporal resource evolvement patterns. Hobfoll et al. (2018) recently called for further advances of the COR regarding how resource types may interactively shape stress levels. Our results respond with an implied potential interaction between dispositional resources and foundational resources (i.e., physical, cognitive, affective, and social). Despite the dominant significance of dispositional resources in allocating foundational resources and determining stress levels (Halbesleben et al., 2014), the insufficiency of foundational resources can also limit the effectiveness of dispositional resources in coping with stressors. Our findings suggest a possible suppression of the stress-alleviating effects of self-efficacy/empathy by a shortage of physical/affective resources exhausted over the trip. Further, this study adds to the COR literature by accounting for the role of time, per the request of Hobfoll et al. (2018). So far, only limited attempts have been made to integrate a temporal component into the use of resources (Halbesleben et al., 2014). This study goes beyond the early findings by establishing a potential curvilinear relationship between trip duration and resource sufficiency as indicated by stress levels. Specifically, a longer trip may benefit rather than exhaust the resource reservoir only when the length of the leisure trip exceeds a certain threshold. Thus, our findings respond to Hobfoll et al.'s (2018, p. 114) call to explore the roles time could play in resource dynamics, ranging "from the amount of time over which resources are lost or gained, to the length of recovery periods necessary to regain resources …". Future research may further extend these findings by estimating how trip length could affect resource consumption/restorage patterns at travel stages beyond air travel, such as the post-trip duration before the restored resources from vacation are depleted.
[Table 3 (content not reproduced here): The influences of personal/situational factors on air-travel stress toward regular airline/airport service deliveries.]

Third, this study further unveils the unexploited potential, and thus the value, of COR in guiding stress management research. On the one hand, existing COR literature has not explored how a certain activity can transition between resource losses and investments. We proposed a process through which resource losses can be transformed into resource investments via a setting that facilitates resource gains. For example, before flight departure a longer trip may seem to consume more cognitive and affective resources for travelers with high job strain. Yet once they experience the longer trip, it can turn into a resource investment because it facilitates gains of self-esteem and the restorage of consumed resources. This enlightens a promising direction for alleviating travel stress by encouraging travelers to devote more resources to coping with travel stressors, given that the resource consumption is an investment rather than a loss. More importantly, this proposed angle suggests the still-underrated potential of COR in managing stress of different kinds in individual life, by pointing out the decisive factor of how resource consumption is construed, as well as the promising role of activity context in facilitating or inhibiting the positive construal of resource consumption. On the other hand, this study proposes a direction that could further enhance the accuracy of COR-based stress analyses. It establishes the paramount significance for COR-based stress analysis to center on stressors. In addition to the common resources identified in the literature review that should affect air-travel stress regardless of stressors, this study denotes the existence of stressor-specific resource types (see Table 4). These resource types are indispensable to certain stressors and may exhibit a disproportional importance in shaping the corresponding stress reactions. Future research may conduct a stressor-based decomposition of resources for coping and rank those resources by importance to improve the accuracy of COR-based stress analysis. Taking a step further, potential strategies facilitating the restorage, or preventing the depletion, of the identified most critical resource type(s) can accordingly be developed and tested for stress-alleviation effectiveness when facing the corresponding stressor. For instance, future research could test whether an individual's stress level corresponding to the stressor of noisy crowds would significantly decline when this individual adopts emotion regulation strategies to conserve the most critical coping support, the affective resources.
This direction extends the scope of potential stress management strategies to be considered, which further broadens COR's contribution to stress management research. Finally, with regard to methodological advances, this study collects data from travelers at the gates while they are still experiencing air-travel stress. The data thereby captures multiple facets of air-travel stress. It covers not only the passengers' recalled stress that was experienced before arriving at the gate (e.g., fear of missing a flight), but also their stress reactions to ongoing stressors (e.g., uncertainty about a possible flight delay or airport crowdedness), as well as anticipated ones (e.g., potential insufficiency of luggage space or noisy fellow passengers). Both experienced and anticipated stressors trigger people's currently experienced stress levels (Spacapan & Cohen, 1983). Existing literature mostly captures air-travel stress retrospectively, with travelers surveyed either at the destination (Larsen et al., 2009; Reisinger & Mavondo, 2005) or after they have already returned home, based on experience recall (Chen, 2017; Chen et al., 2016; Deng & Ritchie, 2018). Even though a limited number of studies collect data directly from passengers at airports, these are still primarily retrospective and collected after the trip is completed (Batouei et al., 2019; Beck, Rose, & Merkert, 2018). In other words, those passengers would not feel the air-travel stress at the time of measurement, as the flight was already taken. In our study, participants are still waiting to take a flight, allowing us to capture the real-time felt stress toward the before-boarding process as well as the upcoming boarding and flight experiences. This rare and difficult form of data collection allows for more accurate measures of stress levels and improved stress analyses.
Our study is also unique in the sense of distinguishing between departure and return stages, which enables the exploration of the potential variations of stress levels and sources of influences between stages, as proposed by existing literature (Chen et al., 2018).
Practical implications
[Table 3, continued (content not reproduced here). Notes: SAS = Stress toward Airline/Airport service deliveries; SAE = Stress toward Adverse Events; SOP = Stress toward Other Passengers.]

By identifying the potential influential personal and situational factors of travelers' air-travel stress, this study provides airlines and airports with a direction to strategically and effectively manage passenger stress levels through marketing, service delivery, and crisis management. Airlines and airports may generate profiles based on the sensitivity of certain traveler groups or travel contexts and create tailored programs or initiatives. As all these influential factors are readily accessible in airline/airport databases, they provide a feasible and convenient approach for industrial stress management. The detailed profile of more sensitive passenger groups and contexts corresponding to each air-travel stressor and flight stage is listed in Table 5 (with factors ranked by their relative extent of influence on stress). Following the profiling, the potential resource dynamics underlying the identified influences on air-travel stress can further guide more effective service design and marketing initiatives for stress alleviation. Some example initiatives toward each group in Table 5 are recommended as follows.
First, passengers who are more susceptible to stress toward irregular adverse events (SAE) can enjoy a more relaxing flight experience with airlines/airports supporting their resilience resource. This can be done by demonstrating efforts preventing adverse events (stressor removal), providing clear instruction on what the passenger can do (self-efficacy support) or promising reliable assistance such as a compensatory night of stay at the airport hotel (social or energy support) if any uncontrollable adverse events indeed occur. Marketing promotions delivering assuring messages corresponding to the above aspects also better appeal to these travelers. The close monitoring of passengers sensitive to adverse events is also beneficial to crisis prevention and management, as it can help avert or rapidly spot passenger illness or extremely negative eWOM caused by travel stress.
Second, when it comes to enhancing flight experiences by cultivating a comfortable social environment, airlines/airports should be mindful about those passengers more sensitive to fellow passengers' unpleasant behaviors (SOP). They can potentially incorporate this factor in service design such as seat assignment and boarding order designation (stressor removal) or can potentially show more personalized care or empathy to these passengers (empathy support via social support). These people are also the more promising target markets for promoting services that can minimize social interference such as first-class cabins and priority boarding (i.e., monetary investment for gains of affective resources).
Third, to manage the fast-spreading tension about unsatisfactory regular airline/airport service deliveries (SAS) and mistrust in an airline/airport, the airline/airport can pay special attention to the service feedback of passengers who are potentially more sensitive to unsatisfactory service deliveries. If any service failure occurs, airlines/airports should offer social support by encouraging this group to voice concerns to their staff right away and provide a sincere and satisfactory recovery as soon as possible (e.g., with apologies, problem fixes, proper compensation, and value recognition of their feedback). These people may also be more attracted by a marketing message with a service guarantee assuring high service quality (stressor removal).
Limitations and future research
This study's limitations provide fertile ground for future research. First, we limited the consideration of potential resource dynamics shaping the air-travel stress to those with literature support. Additional resource dynamics may also arise upon the availability of new empirical evidence. Future studies can also develop a more fine-grained list of the specific resource types falling into each category of physical, cognitive, affective, social, and dispositional resources. Our primary goal is to establish a solid theoretical foundation that can guide the prediction of all-level influences (i.e., macro, meso, and micro) on leisure-travel stress of diverse stages/types. Future explorations may include the development and validation of measures for different resource types, quantifying the mediation effects of resource types connecting influential factors with travel stress, and identifying effective resource-supportive interventions for stress alleviation.
Also, the findings suggest that readily available factors alone explain less than 15% of the total air-travel stress variance; thus, the collection of data on more in-depth factors may still be necessary to accurately predict stress levels. Future studies may include additional variables, such as personality traits, stress levels in other aspects of life (i.e., work, family, health, etc.), relationships and travel experiences with companions, engaged activities during the trip, as well as the airline brand(s) of taken flights and connection duration. More samples with no current employment, younger than 20 years old, or travelling with a big group (10 companions or more) may further enhance the explanatory power (Appendix I). Future studies may also estimate existing variables differently. For example, the proxy of geographical distance (between the surveyed airport and travel destination) is only a rough estimate, due to the possibility of the current airport being a connection point. While the current data were collected in 2008 (before the financial crisis), more recent data collection (in the before-COVID regular settings) would have been preferable. One advantage of that timing, nevertheless, is not being immediately after any major events (e.g., the September 11, 2001 attacks), which largely protects the generalizability of findings from any extreme and fluctuating short-term influences of major events. More importantly, a key contribution of this study is proposing and demonstrating the application of a COR-based schema for identifying the shaping forces of travel stress. The proposed schema should be applicable to travel stress analyses under any context (i.e., given influences from micro-, meso-, or macro-level factors, and regardless of the past, present, or future).
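The "readily available factors explain less than 15% of the variance" point rests on hierarchical (blockwise) regression, where predictor blocks are entered in steps and the incremental R² of each step is examined. The sketch below is a hypothetical illustration with synthetic data, not the study's actual model: the block names, effect sizes, and sample size are all assumptions made for demonstration.

```python
import numpy as np

def r_squared(X, y):
    # Fit OLS via least squares (with an intercept) and return R-squared.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 500
# Block 1: readily available factors (e.g., age, travel frequency) -- synthetic stand-ins.
block1 = rng.normal(size=(n, 2))
# Block 2: a hypothetical in-depth factor (e.g., a personality trait) -- also synthetic.
block2 = rng.normal(size=(n, 1))
# Simulated stress outcome: weak block-1 signal plus a stronger in-depth component and noise.
y = 0.2 * block1[:, 0] + 0.5 * block2[:, 0] + rng.normal(size=n)

r2_step1 = r_squared(block1, y)                            # variance explained by block 1 alone
r2_step2 = r_squared(np.column_stack([block1, block2]), y) # after adding block 2
delta_r2 = r2_step2 - r2_step1                             # incremental variance explained
```

Under these assumed effect sizes, the readily available block alone leaves most variance unexplained, and the incremental R² quantifies how much an added in-depth factor would improve prediction, mirroring the kind of gain the authors anticipate from richer data.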
The identified influences on air-travel stress also offer enduring value, as our exploration of air-travel stress is based on stressors that persist through the years and still commonly trigger passengers' stress reactions. Therefore, the collection timing should not significantly limit the generalizability of our research findings. In fact, this study offers a baseline air-travel stress analysis for future research to conduct a similar study after the pandemic and compare, in order to identify to what extent the pandemic may or may not have significantly changed the demographic/trip-specific influences on air-travel stress and the associated resource dynamics.

[Table 5 note: The profile features are ranked based on their relative importance in predicting the corresponding type of air-travel stress.]
In addition, the current study adopts a cross-sectional between-subjects design in identifying the influences on air-travel stress. This can minimize the learning and transfer effects between the departure- and return-flight stages (Caplan, Lane, & Grimson, 1995). Ideally, a supplementary within-subject tracking of stress fluctuation (e.g., a repeated cross-sectional design) could further reduce the random noise due to uncontrolled individual differences. Adding a stress measurement during the flight could also enhance accuracy by measuring all sources of real-time stress, while the current study only captured the real-time stress related to airports as well as participants' anticipated stress toward onboard experiences, due to the challenges of onboard data collection. Future research may consider adopting experience sampling methods to improve accuracy (Halbesleben et al., 2014).
Moreover, future research may evaluate stress as reactions to different stressors and at all trip stages (rather than the current focus on air-travel stages), and most importantly, measure the status of relevant resource types at each stage. This will be important to (1) establish the resource dynamics between stages that cause the stress fluctuations, and (2) identify the most critical resource types to travel stress alleviation, which can further inform the trip design that boosts leisure-travel benefits for well-being.
Finally, the current study essentially examines how micro-level factors (i.e., personal and trip-focused situational factors) and stressors jointly affect the resource dynamics resulting in travel stress. Future research can also introduce resource-gain facilitators (as opposed to resource-consuming stressors), such as enhanced efficiency due to technological advances, and higher-level factors (i.e., meso-factors such as the air-industry trend of reducing leg space, and macro-factors such as political tension). In this sense, the proposed COR framework offers the potential for future research to bridge individual experience with macro-level policy making.
Declaration of Interest Statement
None.
Impact statement
This study introduces a COR-adapted schema for travel stress analyses, demonstrated with air-travel stages. This schema for the first time allows a holistic and standardized exploration of the influences of factors at different levels, whether macro-level (e.g., economic/technological/environmental advances/crises), meso-level (e.g., business policies/strategies), or micro-level (e.g., individual life changes), on stress levels across different travel stages. The understanding of the shaping forces of travel stress, their patterns of influence, and the underlying shaping mechanisms can guide service and marketing design to effectively manage travel stress levels. This is critical to encouraging frequent travel, enhancing brand loyalty, and facilitating well-being from leisure travel. The holistic assessment of travel stress by accounting for resource dynamics from other travel stages is also deemed necessary to plan for the growing tourist expectation of seamless travel experiences. This study also advocates for fully capitalizing on existing tourist data in business/industrial databases to achieve efficient industry stress management.
Contribution
Ye Zhang: Conceptualization, Data Analyses, Writing the initial draft, and Revisions. Jase Ramsey: Data Collection, Conceptualization, Data Analyses, Writing the initial draft, and Revisions. Melanie Lorenz: Conceptualization, Writing the initial draft, and Revisions.
How Will the Mild Encephalitis Hypothesis of Schizophrenia Influence Stigmatization?
Introduction

People diagnosed with mental disorders, particularly those with schizophrenia, are severely stigmatized (1, 2). The image of people with mental disorders is strongly influenced by the mass media, which in turn are influenced by the prevailing medical opinion as well as by current research results. Therefore, researchers in psychiatry bear a certain responsibility for the stigmatization of their very own research objects.
Within recent years, the mild encephalitis hypothesis has received more and more scientific interest. According to this hypothesis, a mild, but chronic, encephalitis underlies the symptoms of schizophrenia in a subgroup of patients. Infections, traumas, or autoimmune diseases can cause a mild encephalitis, which leads to psychiatric and/or neurological symptoms (3–5).
Since the mass media have recently started to report about the association of brain inflammation and schizophrenia, the mild encephalitis hypothesis is starting to influence the public's opinion about people diagnosed with schizophrenia, and thus will have a certain influence on the stigmatization. Whether it will increase or decrease stigmatization has not yet been investigated empirically. In the following, we discuss this question on grounds of theoretical concepts and empirical research on stigmatization of schizophrenia.
Stigmatization of Mental Disorders
Stigmatization is sociologically defined as the classification and stereotyping of people because of a negatively connoted attribute, together with segregation and loss of social status, discrimination in important contexts, and devaluation in a social hierarchy in a situation of exercise of power (6). Many stigmatized individuals internalize the negative evaluation, try to hide the negatively connoted attribute, and withdraw from society (self-stigmatization). Stigmatization often affects the social circle, particularly the families (courtesy stigma) (7).
Many biologically orientated researchers are convinced that biological explanations of psychiatric disorders will reduce stigma. This optimistic view is based on the attribution theory, assuming that the main reason for stigmatization is the attribution of guilt or responsibility for the onset and/or maintenance of the deviant behavior (8). Accordingly, biological, and particularly genetic, explanations should reduce blame against persons with mental disorders as soon as people understand that the strange or frightening behavior is not caused by evilness or weak will, but by a disease (9).
This conviction is contested by many social scientists. Because both the moral and the medical concepts assume an inborn predisposition for deviant behavior, a genetic explanation of deviant behavior does not diminish rejection (10). Genetic explanations assume mental disorders to be unchangeable, more serious, and hereditable (9,11). People convinced of "genetic essentialism" believe that the genes are a person's essence and that the characteristics and behaviors of a person are based on his/her genetic makeup (11). Genetic explanations increase self-stigmatization (12) and courtesy stigma, particularly the stigmatization of genetic relatives of people with mental illness (9). Furthermore, this approach supports a paternalistic attitude towards mentally ill persons, questioning their autonomy and decisional capacity (13).
The attribution theory and the concept of genetic essentialism are not mutually exclusive; rather they grasp different aspects of stigmatization: the first one mainly the attribution of guilt and the second mainly the fear and the feeling of social distance (10).
EMPIRICAL RESEARCH ON STIGMATIZATION OF MENTAL DISORDERS
Empirical research supports the theory of genetic essentialism and widely disproves the attribution theory for major depression and schizophrenia. For example, a representative study with 1,241 participants (9) confirmed only one prediction of the attribution theory, namely, that people who are convinced of genetic explanations pleaded for lesser punishments for violent behavior of mentally disordered persons. However, there was support for predictions based on the concept of genetic essentialism. People who assume genetic causes of schizophrenia believe in a greater seriousness, tenacity, and pervasiveness of the deviance and hold more social distance against the siblings of mentally disordered persons.
A systematic review of population-based studies found that biogenetic beliefs about the cause of schizophrenia or depression were associated with greater social distance and thus stronger stigmatizing attitudes (1).
Based on the aforementioned and further studies on stigmatization, we have hypothesized that several factors influence whether a given biological model of a given psychiatric disorder will increase stigmatization: (1) disease-specific factors and (2) model-specific factors (10).
(1) Disease-specific factors: biological explanations increase the stigmatization of a given psychiatric disorder as soon as people think that this disorder is associated with (a) high dangerousness/unpredictability, (b) high psychosocial disability, (c) poor treatment success, and (d) high responsibility for the onset and/or offset of the disease. Among these factors, the most important one is perceived dangerousness/unpredictability, because this attribution leads people to seek social distance (2). (2) Model-specific factors: there are different models of psychiatric disorders, e.g., psychosocial models, the genetic model, the neurotransmitter disturbance model, or the mild encephalitis hypothesis. Model-specific factors can modulate the effects of disease-specific factors in various ways. For example, they can influence the factor dangerousness/unpredictability either by changing the real dangerousness of people with this disorder or by changing people's perception of that dangerousness. The first effect could take place if the model implied an effective treatment against psychosis and/or aggressiveness, the latter if the model convinced people that the disorder was not necessarily associated with dangerousness.
The differential effects of the model-specific factors might be contradictory. For example, genetic explanations of schizophrenia decrease onset responsibility but might dash hopes for successful treatments, at least in laymen's perception.
Indeed, empirical research on the effects of different models on stigmatization has brought inconsistent results.
According to Rüsch et al. (12), the endorsement of genetic explanations was correlated with a stronger desire for social distance, whereas the endorsement of neurobiological explanations was not correlated with stigmatizing attitudes. In both cases, the attribution of responsibility was reduced.
According to Angermeyer et al. (14), the endorsement of a brain disease hypothesis is associated with increased anger and fear, which in turn are associated with increased social distance. In contrast, there was no significant association between the endorsement of hereditary factors and social distance, presumably because the endorsement of hereditary factors increases fear on the one hand and prosocial feelings on the other.
In general, biological explanations of schizophrenia increase stigmatization, because schizophrenia scores highly on three disease-specific factors (dangerousness/unpredictability, psychosocial disability, and poor treatment success). However, it remains an open question whether, and to what extent, neurobiological explanations affect stigmatization differently from genetic explanations. This situation is due not only to the inconsistent study results but also to the rather crude biological explanations used in the studies.
ANTI-STIGMA MESSAGES
Accompanying research on stigmatization can contribute to a responsible psychiatric research that will not harm psychiatric patients by involuntarily increasing stigma. Empirical research on stigmatization of mental disorders is particularly necessary for communicating research results to the media and for designing anti-stigma campaigns which are not only well-intended but indeed beneficial for the concerned people. Since stigmatization is a multi-faceted phenomenon, interventions aiming at reducing stigma often have contradictory and unexpected effects.
According to a consensus paper on campaigns to reduce mental health-related stigma, the following message types should be used: (1) recovery-oriented, (2) "see the person," (3) social inclusion/human rights, and (4) high prevalence of mental disorders (15). Additionally, information on the continuous nature of psychopathological phenomena is recommended for anti-stigma messages (16).
INFLUENCE OF THE MILD ENCEPHALITIS HYPOTHESIS ON STIGMATIZATION
We expect that the mild encephalitis hypothesis will have various effects on the stigmatization of schizophrenia. This hypothesis offers concrete hope for effective therapies with anti-inflammatory drugs for a subgroup of patients diagnosed with schizophrenia (17). Patients will probably accept these drugs better, so that their compliance will improve and relapse rates might be reduced. With effective and potent drugs, many patients could be treated successfully, so that the dangerousness due to psychosis would vanish. Furthermore, their cognitive decline could be stopped, so that their level of cognitive functioning would be better. Diminished dangerousness and better cognitive functioning will positively affect their social inclusion.
Because the mild encephalitis hypothesis contains no genetic determinism, but the concept of a genetic vulnerability, we expect that it will reduce the stigmatization of genetic relatives.
The mild encephalitis hypothesis might reduce stigmatization further because it emphasizes the influence of infections and autoimmune disorders, which can in principle affect everyone, not only those with a special genetic makeup.
The mild encephalitis hypothesis might not influence the attribution of onset responsibility, because the patients are not responsible for any of the known causes of mild encephalitis. However, the attribution of offset responsibility might change significantly: if effective treatments without severe side effects were available, then the acceptance of the concept "liberty of illness" might diminish. People who refuse effective treatments will be considered as responsible for their enduring mental illness.
Finally, we expect that stigmatization would be reduced significantly because the mild encephalitis hypothesis would support shifting the organizational authority over patients with schizophrenia from psychiatry to multi-disciplinary institutions combining psychiatry and neurology.
Therefore, we expect that the mild encephalitis hypothesis will contribute to the destigmatization of schizophrenia, particularly, of course, if it leads to effective drug therapies.
AUTHOR CONTRIBUTIONS
SM and RR have both contributed to the article with regard to development of ideas. SM wrote the first draft of the manuscript and developed the structure of the paper. Both authors read and approved the final manuscript.
FUNDING
This work was partly funded by the Federal Ministry of Education and Research of Germany (01GP1621A).
"year": 2017,
"sha1": "316bf64e4dc1f7e9f79a003c43af98cf2bd91634",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2017.00067/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "316bf64e4dc1f7e9f79a003c43af98cf2bd91634",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
m7GHub V2.0: an updated database for decoding the N7-methylguanosine (m7G) epitranscriptome
Abstract With recent progress in mapping N7-methylguanosine (m7G) RNA methylation sites, tens of thousands of experimentally validated m7G sites have been discovered in various species, shedding light on the significant role of m7G modification in regulating numerous biological processes including disease pathogenesis. An integrated resource that enables the sharing, annotation and customized analysis of m7G data will greatly facilitate m7G studies under various physiological contexts. We previously developed the m7GHub database to host mRNA m7G sites identified in the human transcriptome. Here, we present m7GHub v.2.0, an updated resource for a comprehensive collection of m7G modifications in various types of RNA across multiple species: an m7GDB database containing 430 898 putative m7G sites identified in 23 species, collected from both widely applied next-generation sequencing (NGS) and the emerging Oxford Nanopore direct RNA sequencing (ONT) techniques; an m7GDiseaseDB hosting 156 206 m7G-associated variants (involving addition or removal of an m7G site), including 3238 disease-relevant m7G-SNPs that may function through epitranscriptome disturbance; and two enhanced analysis modules to perform interactive analyses on the collections of m7G sites (m7GFinder) and functional variants (m7GSNPer). We expect that m7GHub v.2.0 should serve as a valuable centralized resource for studying m7G modification. It is freely accessible at: www.rnamd.org/m7GHub2.
Introduction
Over 170 types of chemical modification are naturally decorated on cellular RNAs of all three kingdoms of life, modulating various biological processes such as translation, RNA stability and RNA metabolism (1,2). Among them, N7-methylguanosine (m7G) is the most ubiquitous RNA cap modification added to the 5' cap at the initial stage of transcription (3). Recent studies suggested that m7G capping modulates nearly the entire life cycle of messenger RNA (mRNA), including mRNA splicing (4), translation (5), RNA processing and metabolism (6) and transcription (7), and influences various cellular processes including gene expression and transcript stabilization (8). Additionally, the presence of m7G modification in ribosomal RNA (rRNA) (9) and transfer RNA (tRNA) (10) has also been reported, and mutations that impair tRNA m7G methylation have been found to cause microcephalic primordial dwarfism (11).
We previously developed an integrated resource, m7GHub, to share data on m7G RNA modification in the human transcriptome (12). In the first release, m7GHub collected 44 058 experimentally validated human mRNA m7G sites and 57 769 m7G-associated variants, respectively. Additionally, 1218 disease-relevant m7G-SNPs were further annotated, with implications for the potential pathogenesis of ∼600 disease phenotypes.
To date, several high-throughput sequencing techniques have been developed and applied for transcriptome-wide profiling of m7G RNA modification. The m7G-MeRIP-seq was first introduced in 2019 to profile m7G distribution in the human and mouse transcriptomes, respectively (13). This antibody-based immunoprecipitation technique reveals m7G-containing regions with a resolution of ∼100 bp and has since been further applied to multiple species including rat and zebra fish (14-16). By combining the conventional MeRIP-seq approach with ultraviolet cross-linking, m7G-miCLIP-seq achieved an improved resolution of ∼30 bp (17). In addition, base-resolution approaches such as m7G-seq (13) and m7G-MaP-seq (18) offer the precise location of m7G modification sites. Several overall patterns of m7G modification sites have also been reported across profiling techniques. Specifically, statistically significant GA- or GG-enriched motifs were identified in peaks using m7G-MeRIP-seq (13), while AG-rich contexts were reported from m7G-miCLIP-seq (17). Additionally, diverse sequence motifs around base-resolution m7G sites have also been reported by m7G-seq, with G(m7G)A and A(m7G)A ranking as the top two motifs. Taken together, these findings suggested that additional methyltransferase(s) may be involved in m7G installation (13). Besides next-generation sequencing (NGS)-based methods, the newly emerged direct RNA sequencing platform developed by Oxford Nanopore Technologies (ONT) also provides a promising alternative, allowing the simultaneous real-time identification of any natural modifications in the RNA molecule based on characteristic signals (19). Several pilot studies have offered specific or mixed identification of modified residues, such as m6Anet (m6A) (20), MINES (m6A) (21), nanoPsu (pseudouridine) (22), ELIGOS (mixed) (23) and Tombo (mixed). The ELIGOS and Tombo studies report a set of putative modified residues without differentiating the modification type, but these unknown types of candidate modification site can be further labeled using deep learning models.
In response to our rapidly expanding knowledge of RNA modification, bioinformatics databases have been developed to share, annotate and interpret the generated datasets. These bioinformatics efforts include: MODOMICS for querying RNA modification pathways (24); RMBase v.2.0 for the collection of RNA modification sites (25); RMVar for unveiling RNA modification (RM)-associated variants (26); RM2Target for the collection of writers, erasers and readers (WERs) of RNA modifications (27); m6A-Atlas as an m6A knowledgebase (28); and ConsRM for quantifying m6A conservation (29). However, to the best of our knowledge, resources for m7G-related knowledge are still limited to m7GHub.
In this study, we have upgraded m7GHub to version 2.0 by integrating all recently identified m7G RNA modification sites derived from NGS- and ONT-based studies, from which m7G-affecting variants were revealed using a deep learning model. The m7GHub v.2.0 consists of the following major updates: (i) m7GDB: a comprehensive m7G database consisting of 258 206 NGS-based m7G sites and the first collection of 172 692 putative m7G sites derived from ONT samples with rich functional annotations, covering a total of 23 species. (ii) m7GDiseaseDB: a database holding the most complete collection of 156 206 m7G-associated variants that may add or remove an m7G methylation site, with 3238 disease-relevant variants that may shed light on disease mechanisms acting through epitranscriptome layer circuitry. (iii) Enhanced modules allowing interactive analysis of the database collections and user-uploaded datasets, from which putative m7G sites (m7GFinder) and epitranscriptome disturbance (m7GSNPer) of user-interested genome regions/genetic variants can be determined. The overall design of m7GHub v.2.0 is outlined in Figure 1. We expect that m7GHub v.2.0 will be a valuable one-stop platform for researchers interested in m7G modification; it is freely accessible at: www.rnamd.org/m7GHub2.
Collection of m7G sites based on profiling techniques
The m7G sites collected in m7GHub v.2.0 were derived from both high-throughput sequencing (NGS) and Oxford Nanopore direct RNA sequencing (ONT) samples. Regarding NGS-based studies, the m7G sites were obtained from 74 sequencing samples using five different m7G profiling techniques. Additionally, 116 direct RNA sequencing samples, comprising 42 FAST5 and 74 FASTQ files, were collected from 37 independent studies in the NCBI GEO database (Supplementary Tables S1 and S2). Specifically, the collected m7G sites were classified into three different groups, as illustrated next:
i. NGS techniques (base-resolution): the m7G sites classified in this group were extracted from NGS-based studies at base-resolution level. The genome coordinates of m7G residues were extracted from the relating GSE or corresponding supplementary files of m7G-seq and m7G-MaP-seq studies, respectively. For m7G-seq, we re-processed the raw sequencing data to map the base-resolution m7G sites to human genome assembly hg38, following the same protocol implemented in the original study (13).
ii. NGS techniques (region-level): the m7G-containing regions in this group were identified by m7G-MeRIP-seq (∼150 bp) and m7G-miCLIP-seq (∼30 bp), respectively. Specifically, the m7G-containing regions from m7G-MeRIP-seq were obtained using a common pipeline. The raw FASTQ datasets were directly downloaded from the NCBI Gene Expression Omnibus (GEO) (30), the raw reads were trimmed and aligned to the reference genome using HISAT2 (31), and the peak-calling process was implemented by exomePeak2 (32). Besides m7G-MeRIP-seq, the genome coordinates of m7G-containing regions from m7G-miCLIP-seq were extracted from the supplementary files of its original study (17).
iii. ONT-derived and deep-learning prediction: to unveil the landscape of m7G methylation generated by direct RNA sequencing techniques, we obtained the ONT-based m7G sites by large-scale prediction of modified guanosines using our previously developed deep neural network models (33). As no tools were available for specifically predicting
m7G sites from direct RNA sequencing data, the Tombo and ELIGOS tools were used to screen out all non-canonical guanosines from direct RNA sequencing samples. Specifically, the raw FAST5 data were re-squiggled with the 'Tombo re-squiggle' module, and candidate modification sites were detected by the 'Tombo de novo modification detection' module based on signal shifts. ELIGOS uses the base-calling errors (i.e. insertion, deletion, substitution and decreased base-call qualities) caused by the presence of non-canonical bases. Raw FAST5 data were base-called with Guppy and aligned to the reference genome with Minimap2. Then, ELIGOS extracted the base-call error profile from the alignment SAM file and compared it with the expected one. Sites with significantly higher errors were reported as potential modification sites. Consequently, Tombo and ELIGOS reported a set of putative modified guanosines without differentiating their modification type. The modified guanosines were further assessed by our previously developed neural network (33), trained on the NGS-validated m7G sites from four species (human, mouse, rat and zebra fish), respectively. Only the modified guanosines passing a strict cut-off (average prediction score > 0.5 and upper bound of P-value < 0.05) were retained as putative m7G sites and included in the m7GDB database.
Evaluating the epitranscriptome impact of genetic variants on m7G methylation status
In this study, two types of genetic variant were considered to assess their epitranscriptome impact on m7G methylation status. The germline variants were extracted from dbSNP (v151) (34), 1000 Genomes (Phase 3 Mitochondrial Chromosome Variants set) and Ensembl 2022 (Ensembl release 106) (35). In addition, human somatic variants from 33 different cancer types were collected from The Cancer Genome Atlas (TCGA) (release v.35) (36). Together, a total of 60 826 918 germline variants and 2 264 915 somatic variants identified in four species were included, and the detailed datasets of genetic variants analyzed in this study can be found in Supplementary Table S3.
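The strict cut-off applied to the ONT-derived candidates above (average prediction score > 0.5 and upper bound of the P-value < 0.05) can be sketched as a simple filter. The record structure and field names below are illustrative assumptions, not the database's actual schema:

```python
# Sketch of the final filtering step for ONT-derived candidates: modified
# guanosines reported by Tombo/ELIGOS are retained as putative m7G sites
# only if the deep-learning model's average prediction score exceeds 0.5
# and the upper bound of the P-value is below 0.05.

def filter_putative_m7g(candidates, score_cutoff=0.5, pval_cutoff=0.05):
    """Return candidates passing the strict cut-off used for m7GDB."""
    return [
        c for c in candidates
        if c["avg_score"] > score_cutoff and c["pval_upper"] < pval_cutoff
    ]

candidates = [
    {"pos": 1024, "avg_score": 0.82, "pval_upper": 0.01},  # retained
    {"pos": 2048, "avg_score": 0.40, "pval_upper": 0.01},  # score too low
    {"pos": 4096, "avg_score": 0.90, "pval_upper": 0.20},  # not significant
]
print([c["pos"] for c in filter_putative_m7g(candidates)])  # [1024]
```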
Following the well-defined criteria for predicting m7G-affecting variants in m7GHub and other related studies (26,37), an m7G-associated variant was characterized based on its ability to cause the gain or loss of an m7G modification site, as predicted by our previously described deep neural network models (33). Three different confidence levels were further defined: (i) high: a genetic variant directly altered an experimentally validated m7G site at base-resolution level (m7G-seq or m7G-MaP-seq), leading to the loss of the modified nucleotide; (ii) medium: a genetic variant altered a nucleotide within the 41-nt flanking window of a base-resolution m7G site or within an m7G-containing region (∼30-150 nt, identified by m7G-MeRIP-seq or m7G-miCLIP-seq), resulting in the loss of m7G status in the mutated sequence, as determined by the deep learning model; and (iii) low: the low confidence level covers transcriptome-wide predictions for the reference and mutated sequence (altered by a genetic variant) around guanosines, where a significant increase or decrease in m7G probability reported by the deep learning model defines an m7G-gain or m7G-loss mutation, respectively. Specifically, we calculated the association level (AL) between a genetic variant and an m7G site as follows:
AL = 2·P_SNP − 2·max(0.5, P_WT)  (for gain)
AL = 2·P_WT − 2·max(0.5, P_SNP)  (for loss)
where the association level (AL) is calculated from the probability of m7G methylation status for the reference (wild-type, P_WT) and mutated (SNP-altered, P_SNP) sequence, ranging from 0 to 1, with a value of 1 indicating the greatest epitranscriptome impact of the genetic variant on m7G status. Statistical significance was assessed by comparison to the ALs of all genetic variants, from which we use the upper bound of the P-value to represent the absolute ranking of each m7G-associated variant. Only the variants with a P-value < 0.05 (within the top 5% of ALs of all genetic variants) were retained in the database collection.
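As a minimal sketch, the AL definition above can be written directly in code. Here `p_wt` and `p_snp` stand for P_WT and P_SNP, and, as an illustrative assumption, the gain/loss branch is chosen by which probability is larger (the database additionally applies the top-5% significance filter described above):

```python
def association_level(p_wt, p_snp):
    """Association level (AL) between a variant and m7G status, as defined
    above. Both probabilities lie in [0, 1]; an AL of 1 indicates the
    greatest epitranscriptome impact of the variant on m7G status."""
    if p_snp > p_wt:   # putative m7G-gain variant (site created by the SNP)
        return "gain", 2 * p_snp - 2 * max(0.5, p_wt)
    # putative m7G-loss variant (site destroyed or weakened by the SNP);
    # equal probabilities fall here with AL <= 0 and are filtered out later
    return "loss", 2 * p_wt - 2 * max(0.5, p_snp)

# A variant creating a confident m7G site where none was predicted:
print(association_level(0.25, 0.75))  # ('gain', 0.5)
# A variant destroying a confidently predicted site:
print(association_level(0.75, 0.25))  # ('loss', 0.5)
```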
Functional annotation for m7G sites and m7G-associated variants
Functional annotations were integrated to help better interpret the regulatory roles of the m7G epitranscriptome. The collected m7G sites and functional variants were first annotated with basic information such as gene annotation, transcript structure and predicted RNA secondary structure (38). The potential involvement of post-transcriptional regulation was addressed with data collected from POSTAR2 (39) (RBP binding regions), miRanda (40) and starBase2 (41) (miRNA-RNA interactions), and UCSC browser (42) annotation (GT-AG splicing sites). In addition, the m7G-associated variants were annotated with mutation type (nonsynonymous or synonymous variant), TCGA barcode, RS ID and deleteriousness level (predicted by five independent scores (43-46)). This information was derived from the ANNOVAR package (47), dbSNP (34) and the TCGA database (36).
Potential involvement of m7G methylation in disease pathogenesis
A large number of disease-related variants (TagSNPs) were obtained from ClinVar (48), the GWAS catalog (49) and Johnson and O'Donnell's database (50). In addition, the TagSNPs were used for linkage disequilibrium (LD) analysis with the PLINK (51) tool (parameters: --r2 --ld-snp-list --ld-window-kb 1000 --ld-window 10 --ld-window-r2 0.8). The disease TagSNPs and their LD mutations were mapped to all m7G-associated variants to explore the potential pathogenesis of known disease phenotypes through m7G regulation.
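The mapping step described above — expanding disease TagSNPs with their LD proxies and intersecting the result with the m7G-associated variants — can be sketched as a set operation. All identifiers below are illustrative, and the LD table stands in for PLINK's r² output:

```python
# Sketch of mapping disease TagSNPs (plus LD proxies, r^2 >= 0.8) onto the
# m7G-associated variant collection to find disease-linked m7G variants.

def disease_linked_m7g_variants(tag_snps, ld_proxies, m7g_variants):
    """Return m7G-associated variants that are TagSNPs or in LD with one."""
    linked = set(tag_snps)
    for tag in tag_snps:
        linked.update(ld_proxies.get(tag, []))  # add LD proxies of each tag
    return sorted(linked & set(m7g_variants))

tag_snps = ["rs100", "rs200"]
ld_proxies = {"rs100": ["rs101", "rs102"], "rs200": ["rs201"]}
m7g_variants = ["rs102", "rs201", "rs999"]
print(disease_linked_m7g_variants(tag_snps, ld_proxies, m7g_variants))
# ['rs102', 'rs201']
```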
Database and web interface implementation
Hypertext markup language (HTML), cascading style sheets (CSS) and hypertext preprocessor (PHP) were used in the fundamental development of the m7GHub v.2.0 web interfaces. We implemented MySQL and ECharts to present metadata and statistical diagrams, respectively. Additionally, interactive exploration of genome coordinates of interest is visualized by the JBrowse genome browser (52).
Results
m7G sites collected in m7GDB
The updated m7GDB database holds a total of 430 898 m7G sites (see Table 1). m7G sites with high reliability were collected across 21 species at base-resolution level, such as human (76 077), mouse (13 828), fruit fly (298), pig (366), maize (8939) and Arabidopsis (3083). In particular, the m7G epitranscriptome of 20 species is covered for the first time, and data from direct RNA sequencing samples are included. Compared to the previous version and other epitranscriptomic databases (RMBase (25), RMVar (26) and RMDisease (37)), m7GHub represents the most comprehensive knowledgebase for collections of m7G methylation so far (Table 2).
Potential disease pathogenesis involving m7G disturbance (m7GDiseaseDB)
m7GDiseaseDB holds a total of 156 206 genetic variants that may add or remove m7G methylation status in four species (Table 3), including human (97 407), mouse (23 564), rat (7422) and zebra fish (27 813), providing the most comprehensive map of genetic factors potentially related to m7G disturbance so far. To unveil the potential mechanisms of disease phenotypes functioning at the epitranscriptome layer, we then mapped all collected human m7G-associated variants to pathogenic TagSNPs and their LD mutations. We found that 3238 m7G-associated variants localized on 1651 genes were recorded with 1308 known disease phenotypes, which is nearly three times the number in the previous version. Additionally, 64 266 m7G-associated variants were also derived from TCGA cancer somatic mutations, revealing the potential involvement of m7G methylation in 33 types of human cancer. Finally, we identified the disease phenotypes and TCGA cancer types that are most strongly linked with m7G disturbance (Supplementary Table S4).
Enhanced web interface and usage
The web interface of m7GHub v.2.0 has been re-designed to present an informative, fast and user-friendly one-stop knowledgebase for m7G study, enabling users to quickly query, carry out customized searches of, and freely download all collected datasets. Four major modules are presented in m7GHub, namely m7GDB, m7GDiseaseDB, m7GFinder and m7GSNPer.
m7GDB
The experimentally validated m7G sites are collected in the m7GDB module. Users can visualize the landscape of m7G modification in different species according to the profiling techniques (Figure 2A and B). Several filter options are provided, including a position bar to extract genomic regions of user interest (Figure 2C and D). The returned results exclusively display m7G sites that satisfy all selected filter options (Figure 2E): users can simply click on the site ID to access detailed information about a specific m7G site (Figure 2F).
m7GDiseaseDB
The m7G-associated variants and disease associations were collected in m7GDiseaseDB (Figure 3), from which users can query each m7G-associated SNP with detailed annotations such as reference sequence, mutated sequence, relative position of the SNP, potential involvement in post-transcriptional regulation (miRNA targets, RBP binding, splicing events), cross-links to dbSNP/GtRNAdb and their epitranscriptome effects on m7G status (gain or loss of function). The disease associations can be obtained by clicking the 'GWAS' or 'ClinVar' buttons in the filter columns. In addition, the 'Disease' option in the search box allows users to query all m7G-associated variants linked to a specific disease phenotype, along with other search options such as gene symbol, genome coordinate and RS ID. Finally, m7GDiseaseDB also offers various graphic visualizations displaying the position of the m7G-SNPs along genes and genomic regions of interest, such as the Ensembl and UCSC genome browsers.
Analysis modules (m7GFinder and m7GSNPer)
To allow users to perform interactive analyses on the collected datasets, two enhanced modules are presented based on our previously developed deep neural network models (33). m7GFinder was developed for high-accuracy prediction of putative m7G sites from user-uploaded RNA sequences (standard FASTA format). A minimum sequence length of 41 nt is required as input data (Figure 4A). The multi-instance learning framework treats each entire input sequence as a 'bag' and reports its bag-level label (m7G probability). Importantly, m7GFinder reports the prediction label at the bag level (the entire input sequence), rather than for a specific nucleotide (Figure 4B). Consequently, an input sequence length of around 150 nt (the typical length of m7G peaks from MeRIP-seq) is recommended. Besides m7GFinder, the m7GSNPer module allows users to evaluate the associations between SNPs of interest and the m7G epitranscriptome of a specific species. A standard VCF file containing a group of genetic variants is accepted as input for m7GSNPer, with the association level (AL) calculated between the reference and mutated sequences. The returned results of m7GSNPer can be freely downloaded with detailed column explanations (Figure 4C).
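Since m7GSNPer expects a standard VCF file, the sketch below writes a minimal well-formed VCF v4.2 input; the chromosome coordinates and rs IDs are made up for illustration, and only the standard fixed columns are used:

```python
# Sketch of preparing a minimal VCF input for m7GSNPer. All variant
# records here are illustrative examples, not real annotations.
vcf_lines = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "chr1\t155000\trs12345\tG\tA\t.\t.\t.",   # G>A: might remove an m7G site
    "chr2\t420000\trs67890\tA\tG\t.\t.\t.",   # A>G: might create an m7G site
]
with open("m7gsnper_input.vcf", "w") as fh:
    fh.write("\n".join(vcf_lines) + "\n")
```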
Batch download and API server
Downloading options are provided for all datasets collected in m7GHub v.2.0.
Discussion
With the rapid accumulation of sequencing samples derived from NGS and ONT technologies, comprehensive maps of m7G modifications under various biological contexts have been revealed. We have updated m7GHub to version 2.0, an all-in-one online platform designed to store, annotate, analyze and share m7G data. Compared to the first release (m7GHub v1.0) and other epitranscriptome databases, our updated version covers the most comprehensive collection of m7G-related data to date (Table 2). In conclusion, m7GHub v.2.0 offers an extensive repository of m7G epitranscriptome data across various species. However, in the current version, the landscape of putative m7G modification from direct RNA sequencing samples was predicted by a deep-learning model of modified guanosines, and thus offers only limited reliability. With the rapid advancement and widespread adoption of direct RNA sequencing techniques, we can expect the development of software to directly identify m7G modifications from direct RNA sequencing samples in the near future. Additionally, due to variations in the number of sequencing samples across different species, the m7G sites currently collected in the database cannot directly represent the overall distribution of m7G modification in a given species, especially for species with extremely limited sequencing samples available (e.g. yeast and E. coli). Consequently, the database will undergo regular updates by continuously incorporating the latest sequencing data and methodologies to ensure it remains a useful resource for the m7G research community.
Figure 1. The overall construction of m7GHub v.2.0. The updated m7GHub v.2.0 consists of four major components: (i) m7GDB: the first m7G database containing ∼430 000 putative m7G sites collected from both NGS- and ONT-derived samples; (ii) m7GFinder: a deep learning-based high-accuracy m7G predictor covering m7G identification in four different species; (iii) m7GSNPer: a real-time analysis module to assess the impact of genetic variants on the database collection; (iv) m7GDiseaseDB: a database holding ∼150 000 functional variants involved in m7G modification, with implications for the potential pathogenesis of ∼1300 known phenotypes. An integrated web interface offering query, search, visualization and download functions for all collected data is freely accessible at: www.rnamd.org/m7GHub2.
Figure 2. Contents of m7GDB. (A and B) The m7G sites collected in m7GDB were classified into three different groups according to their profiling techniques; users can briefly check the statistical distribution of the collected data summarized by pie charts. (C and D) Several options are provided to further filter the datasets, including a position bar to extract specific genomic regions of interest. (E and F) Once customized filtering has been applied, the user can click the site ID to view the detailed information of a specific m7G site.
Figure 3.
Figure 3. Enhanced web interface of m7GDiseaseDB. Users can query the collected m 7 G-SNPs by selecting a species and check the summary table. Users can further click the RM ID to access the basic information of the associated m 7 G-SNP and the involved m 7 G site. The web interface also features various graphic visualizations, including the Ensembl and UCSC genome browsers, especially useful for the presentation of SNP information. In addition, the disease associations involving m 7 G methylation can be extracted by searching a specific disease or phenotype.
2.0. (i) Multiple datasets can be simultaneously selected for batch downloading on the 'Download' page. (ii) The application program interface (API) server provides a highly flexible download option: instructions on how to access the API server are provided on the 'API' page.
Figure 4.
Figure 4. Contents of m7GFinder and m7GSNPer. (A) Web interface of m7GFinder. (B) Prediction results from m7GFinder. The m7GFinder reports the prediction label at the bag level (the entire input sequence), rather than at a specific nucleotide. (C) Prediction results from m7GSNPer. The explanation for each column is presented clearly, and the data is available for free download and sharing.
Table 1.
Collection of m 7 G sites in m7GDB
Table 3
Note: The TCGA somatic variants were extracted from 33 different types of human cancer projects. The m 7 G-associated variants classified as high-confidence refer to mutations directly destroying base-resolution modified nucleotides (m 7 G sites). The numbers in the 'ClinVar' and 'GWAS' sections represent the number of m 7 G-associated variants mapped to disease-related TagSNPs having ClinVar or GWAS records, respectively.
Table 2)
, including: (i) a comprehensive database (m7GDB) of 430 898 previously reported m 7 G sites, including the first collection of putative m 7 G sites from ONT-derived samples, Nucleic Acids Research, 2024, Vol. 52, Database issue | 2023-10-10T06:16:56.745Z | 2023-10-09T00:00:00.000 | {
"year": 2023,
"sha1": "f391bcfb15e0c0a570a0429640ae0cb759355647",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/advance-article-pdf/doi/10.1093/nar/gkad789/51956523/gkad789.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31df0ab2dcfafe48e94c4c87c0ec00d90038efb8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
111835881 | pes2o/s2orc | v3-fos-license | Analysis of inverter circuit of Sinusoidal Pulse Width Modulation with keystone waveform
Third harmonic is usually combined with the standard sinusoidal signal to increase the DC voltage utilization within Sinusoidal Pulse Width Modulation (SPWM) based inverters. The inverter of SPWM with a triple-frequency keystone waveform was analyzed and the one which gives the best DC voltage utilization ratio was presented. Two inverters of SPWM with these two different waveforms were simulated in Matlab. And the conclusion is that the Total Harmonic Distortion (THD) of the line voltage from the inverter of SPWM with the keystone waveform is smaller than the one from the inverter of SPWM with the 3rd harmonic.
Introduction
Pulse Width Modulation (PWM) is one of the most popular control methods in full-controlled device circuits [1] and is widely used in many engineering areas [2,3]. There are many feedback schemes [4] and kinds of specific optimized circuits for the PWM method. Different source signals are chosen according to the different requirements when the PWM method is employed. Where inverters are concerned, the source signal is usually a sinusoidal waveform. The PWM waveform whose pulse width varies by the sine law is named the Sinusoidal Pulse Width Modulation (SPWM) waveform. And the technology used to acquire the SPWM waveform is called SPWM technology. Its application in inverter circuits is well known [7]. Till now, it is the control scheme employed in the vast majority of inverter circuits of small and medium power.
The DC voltage utilization ratio denotes the ratio between the output baseband amplitude of the inverter circuit and the input voltage across the collector and the emitter of the IGBT. In high power circuits, a higher DC voltage utilization ratio presents better economic value. For the normal three-phase SPWM inverter circuit, the DC voltage utilization ratio is just √3/2 ≈ 0.866. This is mainly caused by the fact that the amplitude of the sinusoidal modulation signal cannot exceed that of the carrier.
In order to get the highest DC voltage utilization ratio and lower total harmonic distortion (THD) than those the normal modulation method can provide, we add a keystone waveform which triples the frequency of the baseband sinusoidal signal into it. Simulation shows that the design meets the requirement expected.
The rest of the paper is arranged as follows: in Section 2 we find that the maximum DC voltage utilization ratio is 1 when a triple-frequency signal is added into the baseband modulation signal. The parameters of the keystone signal which can acquire the DC voltage utilization ratio 1 when added into the baseband modulation signal are presented in Section 3. In the last section, we compare the THDs of the line voltages of inverter circuits based on the 3rd harmonic and the triple-frequency keystone signal.
Analysis of sinusoidal waveform combined with triple frequency waveforms
In order to increase the utilization ratio of the DC voltage, the 3rd harmonic is often added when the base sine waveform is modulated. There then exists a 3rd harmonic in the phase voltage of the output of the PWM inverter. When we combine the phase voltages into the line voltage, the 3rd harmonic is eliminated and there is theoretically only a sine waveform in the line voltage. Holmes [9] indicates that the modulation ratio of the 3rd harmonic added into the base sine signal is 1/6, if a DC voltage utilization ratio of 1 is needed.
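As a quick numerical check of Holmes' 1/6 ratio (our illustration, not code from the paper), the peak of sin(ωt) + (1/6)sin(3ωt) can be evaluated on a fine grid; it equals √3/2 ≈ 0.866 instead of 1, so rescaling the whole modulation signal by 2/√3 restores a unit peak and hence a DC voltage utilization ratio of 1:

```python
import numpy as np

# Evaluate both modulation signals over one fundamental period.
x = np.linspace(0.0, 2.0 * np.pi, 1_000_001)

plain = np.sin(x)                              # standard SPWM reference
injected = np.sin(x) + np.sin(3.0 * x) / 6.0   # 1/6 third-harmonic injected

peak_plain = np.max(np.abs(plain))             # peak 1
peak_injected = np.max(np.abs(injected))       # peak sqrt(3)/2 ~ 0.866

# Rescaling each signal to unit peak gains this factor of fundamental:
gain = peak_plain / peak_injected              # 2/sqrt(3) ~ 1.1547
print(peak_injected, gain)
```

The injection lowers the peak without changing the fundamental amplitude, which is exactly why the rescaled signal fits under the carrier.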
Actually, the maximum DC voltage utilization ratio is the same for all continuous waveforms which are symmetrical about the origin and triple the frequency of the baseband sine waveform. Let f denote the frequency of the base sine waveform and ω = 2πf. Assume that g(t) is some periodic continuous function of the variable t, that g(t) is symmetrical about the origin and that the frequency of g(t) is 3f.
Writing h(x) = g(x/ω), so that h is odd with period 2π/3, we know that h(x) is symmetrical about π/3 and h(π/3) = 0 since h(x) is continuous. So the point (π/3, √3/2) is on the combined waveform of the base sine and h(ωt). Then the utilization ratio of the DC voltage of the output line voltage is no more than 1 and can reach 1 when the parameters are set properly.
Analysis of triple frequency keystone waveform
When a keystone waveform is modulated as the source signal in the inverter circuit, the strength is that the DC voltage utilization ratio can be around 1.1 while the weakness is that there are many harmonics. In this section, we add a triple-frequency keystone waveform into the base sine signal as the modulation signal and present the parameters of the keystone waveform which give the maximum DC voltage utilization ratio.
Let h(x) denote a triple-frequency keystone waveform which is symmetrical about the origin. σ is the triangulation rate and k is the height of the keystone waveform. Figure 1 shows the waveform of h(x) when x is between the origin and 2π/3.
in which L = π/3. From Section 2, h(x) + sin(x) must reach the maximum √3/2 when x = π/3 or π/2. After some simple calculation, we have the parameters of h(x) that achieve the highest DC voltage utilization ratio: k = 1 − √3/2, σ = 12/π (1 − √3/2). From the above, the modulation signal 2/√3 (h(ωt) + sin(ωt)) can bring the DC voltage utilization ratio to 1. It is drawn in Figure 2.
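Since Figure 1 is not reproduced here, the following sketch rebuilds a keystone satisfying the paper's constraints: an odd trapezoid of period 2π/3 and height k whose ramps have slope 1/2, which makes h + sin peak at √3/2 at both x = π/3 and x = π/2; with this shape the ramps occupy the fraction σ = 12/π (1 − √3/2) of each period. The exact shape details are our assumption, not the authors' definition:

```python
import numpy as np

K = 1.0 - np.sqrt(3.0) / 2.0        # keystone height k
SIGMA = 12.0 / np.pi * K            # triangulation rate (ramp fraction of the period)

def keystone(x):
    """Assumed shape: odd trapezoid of period 2*pi/3 -- ramp of slope 1/2
    through the origin up to +K, flat top, ramp of slope -1/2 through
    pi/3 down to -K, flat bottom, repeating periodically."""
    u = np.mod(x, 2.0 * np.pi / 3.0)
    sign = np.where(u <= np.pi / 3.0, 1.0, -1.0)
    v = np.where(u <= np.pi / 3.0, u, 2.0 * np.pi / 3.0 - u)  # fold into [0, pi/3]
    return sign * np.minimum(np.minimum(v / 2.0, K), (np.pi / 3.0 - v) / 2.0)

x = np.linspace(0.0, 2.0 * np.pi, 1_000_001)
modulated = 2.0 / np.sqrt(3.0) * (keystone(x) + np.sin(x))
peak = np.max(np.abs(modulated))    # stays at 1 -> DC voltage utilization ratio 1
print(SIGMA, peak)
```

The slope-1/2 ramps make the derivative of h + sin vanish exactly at π/3, and the flat bottom at −k makes −k + sin(π/2) = √3/2, so both stated maxima come out of the same construction.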
Comparison of THDs
Theoretically, there is only a sinusoidal waveform in the output line voltage of inverters when the source modulation signal is the summation of the baseband sine and its 3rd harmonic. But the THD is not zero when we consider the hardware factors in the inverter circuits. In this section, we simulate two inverter circuits with Matlab/Simulink. These two circuits are both SPWM inverter circuits. The only difference between them is the modulation signal. The first one is 2/√3 (sin(ωt) + 1/6 sin(3ωt)), the other one is 2/√3 (h(ωt) + sin(ωt)) in which h(x) is as specified in (1) by k = 1 − √3/2, σ = 12/π (1 − √3/2). The above analysis shows that the DC voltage utilization ratios of both inverter circuits are 1. We show that the THD of the latter is smaller than that of the former. Without loss of generality, a normal three-phase PWM inverter model shown in Figure 3 is used for the simulation. Fig.3 General three-phase inverter For the simulation, we set the Matlab/Simulink parameters as follows: the frequency of the base sine signal is 50 Hz, the frequency of the signal to be added is 150 Hz, the frequency of the carrier of the PWM generator is 1650 Hz, the input DC source is 1000 V and the transformer is just an isolation one.
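Since the Simulink model itself is not included, here is a rough Python re-creation of the described setup (our sketch using the stated parameters: 50 Hz reference with 1/6 third-harmonic injection, 1650 Hz triangular carrier, 1000 V DC link); it checks that the line-voltage fundamental reaches about Vdc, i.e. a DC voltage utilization ratio of 1, and computes the unfiltered THD from an FFT:

```python
import numpy as np

F0, FC, VDC = 50.0, 1650.0, 1000.0   # fundamental, carrier, DC-link voltage
N = 20000                            # samples over one 20 ms fundamental period
t = np.arange(N) * 1e-6              # 1 us time step
w = 2.0 * np.pi * F0

def triangle(tt):                    # triangular carrier in [-1, 1]
    ph = np.mod(tt * FC, 1.0)
    return 4.0 * np.abs(ph - 0.5) - 1.0

carrier = triangle(t)
phases = [0.0, -2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0]
ref = [2.0 / np.sqrt(3.0) * (np.sin(w * t + p) + np.sin(3.0 * (w * t + p)) / 6.0)
       for p in phases]
# Two-level legs: each pole switches between +Vdc/2 and -Vdc/2.
pole = [np.where(r > carrier, VDC / 2.0, -VDC / 2.0) for r in ref]
v_line = pole[0] - pole[1]           # line voltage v_ab

spec = np.abs(np.fft.rfft(v_line)) * 2.0 / N
fund = spec[1]                       # 50 Hz bin (record length = one period)
thd = np.sqrt(np.sum(spec[2:] ** 2)) / fund
print(fund, thd)
```

The third harmonic appears identically in all three pole voltages (common mode), so it cancels in the line voltage; the fundamental comes out near 1000 V, matching the claimed unit utilization ratio.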
The simulation result is shown in Figure 4. Fig.4 Comparison of THD of line voltage with different signals After the inverter system has stabilized (t > 0.1 s), Figure 5, an enlargement of part of Figure 4, shows that the THD of the line voltage in the former inverter circuit, whose modulation signal is 2/√3 (sin(ωt) + 1/6 sin(3ωt)), is larger than that in the latter circuit, whose modulation signal is 2/√3 (h(ωt) + sin(ωt)), in which h(x) is specified by k = 1 − √3/2, σ = 12/π (1 − √3/2).
Summary
In this paper, we discuss the SPWM inverter circuit in which different triple-frequency signals are added into the base sine waveform. We reach the conclusion that the maximum DC voltage utilization ratio is 1 when the modulation signal is the summation of the base sine waveform and some triple-frequency signal symmetrical about the origin. We analyze the SPWM inverter whose base sine waveform is combined with a triple-frequency keystone waveform and obtain the parameters of the added keystone waveform which provide a DC voltage utilization ratio of 1. With the general PWM inverter model, we simulate two different PWM inverter circuits. The modulation signal in the first one is the summation of the base sine and its 3rd harmonic signal, and in the second one it is the summation of the base sine and the keystone signal specified in Section 3. The simulation shows that the THD of the line voltage in the latter is smaller than that in the former circuit. | 2018-12-02T20:52:16.171Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "e26803631bd388888db3658e253396e8c73d45c5",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/25846941.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e26803631bd388888db3658e253396e8c73d45c5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
139649775 | pes2o/s2orc | v3-fos-license | Preliminary study on the fabrication of cellulose nanocomposite film from oil palm empty fruit bunches partially solved into licl/dmac with the variation of dissolution time
This study aims to fabricate cellulose nanocomposite film from oil palm empty fruit bunches (OPEFB) and to compare the differences in the resulting films, which were treated with various dissolution times. The method applied to generate cellulose fiber was the steam explosion method; also, an alkali treatment was employed. In order to form cellulose nanocomposite film from cellulose nanofibers, LiCl/DMAc was used as the solvent with the dissolution time varied, namely 30 minutes and 60 minutes. The chemical structure was investigated using Fourier transform infrared (FT-IR) spectroscopy, whereas the surface morphology was examined using a scanning electron microscope (SEM). The FT-IR results show all the appropriate vibration peaks to confirm that the cellulose nanofibers and nanocomposite film were successfully produced. From the photographs of the cellulose nanocomposite film as well as the SEM photographs, it can be concluded that a 60-minute dissolution time gives a smoother surface and a more transparent film than 30 minutes.
Introduction
Oil palm is a promising commodity in Indonesia due to its use as crude palm oil (CPO). Thus, the Indonesian Government extended its oil palm plantations to 10,955,231 hectares in 2014 with a total CPO production of 29,344,479 tons (Direktorat Jenderal Perkebunan, 2014). Related to the production of CPO, residues are produced, namely 23% of oil palm empty fruit bunches (OPEFB), 8% of shell, 12% of fiber and 66% of liquid waste [1].
OPEFB itself has attracted researchers' attention nowadays because of its abundance, its large cellulose content (45.9%) and its small lignin content (16.5%). Besides, it also has a considerable hemicellulose content (22.8%) [2]. Such a composition makes OPEFB a potential raw material that can be used as chemical feedstock, glucose, and even bioethanol. However, the use of OPEFB is still limited to fertilizer, alternative material for filling car seat cavities, mattresses, briquettes, and raw material for paper production [3]. Therefore, expanding the applications of OPEFB is still needed. One of them is the utilization of OPEFB as a basic material for nanocomposite film.
In order to fabricate the nanocomposite film from OPEFB, some methods are applied. To date, the steam explosion method is a proper method to be used. This method was introduced by Mason et al (1926) as the initial process of biomass processing. The processes employed open the fiber and make the biomass polymer easy to access for the next processes, namely fermentation, hydrolysis, and digestion [4].
Several studies have been done to isolate cellulose nanofibers from raw materials using the steam explosion method. Chaerin et al. (2010) isolated cellulose nanofibers from pineapple leaf fiber using the steam explosion method [5]. The size of the cellulose nanofibers obtained was 5-60 nm. Besides, Ghaderi et al. (2014) also conducted research on the fabrication of an all-cellulose nanocomposite from bagasse to be applied as food packaging. Their research was done by employing LiCl/N,N-dimethylacetamide (LiCl/DMAc) as the solvent and steam explosion as the method. The nanocomposite produced had a maximum tensile strength of 140 MPa [6]. Similarly, Sinaga et al. (2018) also found that cellulose nanocomposite film from corn cobs can be generated using the same method and solvent as Ghaderi et al.
Related to the dissolution process using LiCl/DMAc, some studies have also investigated the effect of various dissolution times. Sinaga et al. (2018) varied the dissolution time between LiCl/DMAc and cellulose nanofibers and found that a longer dissolution time resulted in a smoother film surface; they concluded that the dissolution time affected the resulting film [7]. Similarly, Soykaebkaew et al. (2009), who fabricated a bacterial cellulose nanocomposite using the surface selective dissolution method, stated that an increase in the dissolution time of BC in the solvent resulted in smaller ribbons of bacterial cellulose, and thus the surface became smoother [8].
To our best knowledge, only a few studies have experimented with the fabrication of nanocomposite film from OPEFB, particularly regarding the effects of differences in solvent contact time. Therefore, this study aims to examine the effect of different dissolution times in yielding cellulose nanocomposite film from OPEFB. The dissolution time used is limited to 30 minutes and 60 minutes only. The results were then characterized using Fourier transform infrared (FT-IR) spectroscopy to confirm the success of the cellulose nanocomposite film synthesis. Also, a scanning electron microscope (SEM) was employed to discover differences in the morphology of the results.
Materials
Oil palm empty fruit bunches (OPEFB) were collected from the residue of an oil palm plantation industry in Sei. Buluh, North Sumatera (Indonesia). The OPEFB were then washed with water, dried, chopped and stored at room temperature. The samples obtained were the primary sources in this study. The chemicals used, namely sodium hydroxide, acetic acid, sodium chlorite and oxalic acid, were purchased from Merck & Co. Several pieces of equipment were employed to conduct the steam explosion: a homogenizer (DAIHAN HG-15D) and a laboratory autoclave (with 26 psi of pressure).
Isolation of ɑ-Cellulose from OPEFB
Raw OPEFB fibers were chopped into short fibers (length: ±3-5 cm), reacted with 2% w/v NaOH and left for 12 hours in a beaker glass. Then, the sample was put into an autoclave and kept under 26 psi of pressure at a temperature of 130 °C for 2 h. The pressure was released immediately after that process and the fibers were washed in water in order to reach a neutral pH. Next, the bleaching process was applied by using 10% H2O2 at 70 °C for 2 hours. The cellulose fiber obtained from OPEFB was then washed with distilled water. The ɑ-cellulose was filtered, dried at 60 °C in an oven, and used for FT-IR investigation.
Isolation of Cellulose Nanofibers
An amount of ɑ-cellulose, which was treated with 10% HCl, was sonicated for 2 hours. The suspension was then diluted with distilled water and allowed to settle overnight. Next, washing and filtration were employed in order to obtain a neutral pH. Then, the fibers were suspended in water and homogenized under continuous stirring with a homogenizer at 8,000 rpm for 4 hours. The suspension was then filtered, dried at 45 °C in an oven, and used for FT-IR and SEM investigation.
Preparation and Characterization of Nanocomposite Film
The dried cellulose nanofibers were reacted with DI water, acetone and N,N-dimethylacetamide (DMAc) for 1 hour at room temperature, followed by a dissolution process with 8% LiCl/DMAc for 30 and 60 minutes (the time applied is called the dissolution time). Next, the samples were poured into the mold and dried with hydraulic press equipment for 1 hour at 70 °C. Then, the cellulose nanocomposite was obtained and used for FT-IR and SEM investigation.
Characterization
The dried nanofibers of OPEFB were mixed with KBr powder and examined using an FT-IR (Shimadzu IRPrestige-21) unit with a scanning region of 4000-500 cm-1 at 16 cm-1 resolution and averaging of 45 scans. The same terms were applied to the cellulose nanofibers and the cellulose nanocomposite film. The morphology of the surface of the cellulose nanocomposite film was examined by using a scanning electron microscope (SEM) (Bruker) at 500× magnification under 10.00 kV of voltage. The photograph of each sample stage was taken using an 8-megapixel camera.
Chemical composition of fibers
The chemical composition of OPEFB is summarized over all stages of treatment. In the raw form of OPEFB (fiber), there are other components besides cellulose, hemicellulose and lignin, namely pectin, wax, moisture content, etc (Figure 1(a) and 1(b)). When the fibers were exposed to the alkali treatment, the hemicelluloses were removed. Besides, the steam explosion treatment also decreased the proportion of hemicellulose and lignin (Figure 1(c)). However, a remainder of hemicellulose and lignin still existed. It can be removed by the bleaching treatment, obtaining almost pure cellulose, so the samples are more suitable for extracting whiskers (Figure 1(d)); this is called ɑ-cellulose. Homogenizing was performed to obtain a smaller fiber size, called cellulose nanofibers (Figure 1(e)). After the cellulose nanocomposite films were obtained from the different dissolution times (30 minutes and 60 minutes), there is a brightness difference in their appearance that can be seen in Figure 1(f) and 1(g), as later confirmed by SEM analysis. It suggests that 60 minutes of dissolution time gives a better appearance of the film. Thus, the dissolution time plays a substantial role in forming the nanocomposite film besides the solvent used. Related to the forming of cellulose fiber from OPEFB using the steam explosion method, some explanations are given. According to Fernandez et al. (1999), in alkali treatment followed by steam explosion, the hemicelluloses are partially hydrolyzed; also, the lignin is depolymerized. Thus, sugars and phenolic resin compounds, which are partly soluble in water, increase [9]. In addition, Xiao et al. (2001) mentioned that during that process, there is damage to alkali-labile linkages between lignin monomers, or between lignin and polysaccharides. Carboxylic or phenolic groups, as acidic molecules, are ionized in alkaline solution, so they might boost the solubilization of the lignin [10]. This occurred because of the hydrolysis treatment using hot alkali [11].
Fourier Transform Infrared (FT-IR) Spectroscopy Analysis
FT-IR spectroscopy was carried out to discover the alteration of the chemical structures of the fibers from OPEFB after the treatments using the steam explosion method, as well as the differences in structure for the various dissolution times. The spectra are shown in Figure 2. As shown in Figure 2, the cellulose, cellulose nanofibers and cellulose nanocomposite films (30 and 60 minutes) show two main regions of absorbance: at low wavenumbers (500-1750 cm−1) and at higher wavenumbers (2800-3500 cm−1). This result is in accordance with the study by Abderrahim et al. (2015), who measured commercial cellulose, and Lani et al. (2014), who obtained cellulose from OPEFB [12]. As can be seen, there is no vibration peak observed at around 1700 cm−1 in any of the spectra, due to the removal of the hemicelluloses, which would be shown by C=O stretching, or the disappearance of the ester carbonyl groups in the p-coumaric units of the lignin. This is believed to be caused by the alkali treatment [13]. Besides, Figure 2 also depicts the shift of wavenumbers from the cellulose nanofibers to the cellulose nanocomposite film, which indicates the effect of dissolution using LiCl/DMAc. A similar situation was also found by Sinaga et al. (2018). In conclusion, these results show that the isolation of cellulose as well as the cellulose nanocomposite film from OPEFB have been done successfully.
Furthermore, from the analysis of those four FT-IR spectra, the absorption peaks located in the range 3300-3500 cm−1 are assigned to the hydroxyl group (-OH), whereas those ranging from 2890 cm−1 to 2900 cm−1 are attributed to aliphatic saturated C-H stretching vibrations [14]. In addition, stretching of the -C-O- group of secondary alcohols and ether functions, which exist in the cellulose chain backbone, is reflected by vibration peaks in the region 1040-1070 cm−1, and the absorption band at 894.97 cm-1 is characteristic of the β-glycosidic linkage between glucose units [12].
However, a peak appears at 1635.64 cm-1 in all spectra. Related to that, Le Troedec et al. (2008) explained that the mentioned peak was caused by the reaction between sodium hydroxide and the hydroxyl groups of cellulose, by which water molecules were subsequently formed [15]. As also described by Abraham et al. (2011), even though the drying process was applied, there was still water absorbed in the cellulose molecules, which is difficult to remove because of the cellulose-water interaction [16].
In addition, the cellulose nanocomposite film obtained using 30 minutes shows a similar pattern to that of 60 minutes; even though the resulting vibration peaks differ slightly from each other, as can be seen in Figure 2(c) and 2(d), those peaks are still in the same regions and assigned to the same groups. However, the percentages of transmission obtained show that the 60-minute sample gives a higher transmission than that of 30 minutes; this confirms that the nanocomposite film with a 60-minute dissolution time is more transparent.
Scanning Electron Microscope (SEM) Analysis
SEM was performed to discover the surface morphology of the cellulose nanofibers and the cellulose nanocomposite films produced. As depicted in Figure 3(a), the surface of the cellulose nanofibers is rough and has pores in some areas, whereas the surface of the cellulose nanocomposites (Figure 3(b) and 3(c)) is smooth. This occurred due to the dissolution process using LiCl/DMAc (Sinaga et al. (2018)). In addition, the nanocomposite film from the 60-minute dissolution time shows a smoother surface than that of 30 minutes. This is in agreement with Sinaga et al. (2018): the longer the dissolution time, the smoother the surface yielded. This result confirms the photograph results in Figure 1(f) and 1(g).
Conclusion
From the results obtained, it can be concluded that cellulose nanofibers and nanocomposite film can be successfully generated from oil palm empty fruit bunches (OPEFB) by using an alkali treatment and the steam explosion method. The FT-IR spectra show no vibration peaks of lignin and hemicellulose, as also confirmed by the findings of other studies. LiCl/DMAc as a solvent to form the cellulose nanocomposite film was properly used with the variation of dissolution time, namely 30 minutes and 60 minutes. The cellulose nanocomposite film of 60 minutes gives a smoother surface and a more transparent film than that of 30 minutes. There is no difference in the vibration peaks found between the two. Therefore, those results suggest that research comparing longer dissolution times and the resulting mechanical properties needs to be done in the future. | 2019-04-30T13:08:51.384Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "22fae1d3a7bcf0b907e5b36db0f046e11d42b2f8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1116/4/042012",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "caaf220a6362ad1a6516ef965534db9358e9b58f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Materials Science"
]
} |
126007964 | pes2o/s2orc | v3-fos-license | Decoherence and the measurement problem
The problem of measurement taken at face value shows clearly that there is an inconsistency inside the quantum formalism. The phenomenon of decoherence is often presented as a solution to it. A widely debated question is to decide between two different interpretations. The first one is to consider that the decoherence process has the effect of actually projecting a superposed state onto one of its classically interpretable components, hence doing the same job as the reduction postulate. For the second one, decoherence is only a way to show why no macroscopic superposed state can be observed, and so to explain the classical appearance of the macroscopic world, while the quantum entanglement between the system, the apparatus and the environment never disappears. In this case, explaining why only one single definite outcome is observed remains to be done. In this paper, we examine arguments for and against both interpretations and defend a position according to which the outcome that is observed is relative to the observer, in close parallel to the Everett interpretation. Frontiers of Fundamental Physics 14 FFP14 15-18 July 2014 Aix Marseille University (AMU) Saint-Charles Campus, Marseille
The measurement problem
As is well known, the measurement problem comes from the fact that inside the quantum formalism there are two contradictory postulates for computing the evolution of the state of a system. The first one is the Schrödinger equation, iℏ d/dt |Ψ⟩ = H|Ψ⟩, which is supposed to be used for an isolated system when no measurement is performed on it. The second one is the reduction postulate, which says that when a measurement of a certain observable A is made on a system which is initially in a state that is a superposition of eigenstates of A, |Ψ⟩ = Σk ck |φk⟩, then after the measurement, if the result is the eigenvalue λk of A, the state |Ψ⟩ is projected onto the eigenvector |φk⟩ linked to this eigenvalue, Σk ck |φk⟩ → |φk⟩, or onto the sub-space of the Hilbert space that is spanned by the eigenvectors linked to it, if λk is degenerated. It is of fundamental importance to realize that these two ways of describing the change of the state of a system are not at all compatible. The Schrödinger equation describes a linear and unitary process while the reduction postulate is neither linear nor unitary. That means that there is a priori no way one can get a reduction of the state through the Schrödinger equation. That should not come as a problem if it were possible to give a clear and unambiguous definition of what a measurement is. In this case, we would have two well separated situations, a first one when no measurement is made on the system and a second one when a measurement is made. In the first case, we should apply the Schrödinger equation and in the second one, the reduction postulate. It is nevertheless worth noticing that there is a difference in the way it is possible to use these two rules. The Schrödinger equation can be used without any further knowledge of the system. All that is necessary to compute the future state at an arbitrary time is the initial state |Ψ⟩ and the Hamiltonian H of the system. Using the reduction postulate is more demanding as it requires the knowledge of the result provided
by the measurement. That could seem harmless, as is the case for example in classical statistical physics when one has only a probability distribution on the possible states of the system and one updates the state when learning in which one of the possible states the system really is. But in quantum physics, this statistical interpretation is totally ruled out and hence, the very fact that the knowledge of the result is necessary lies at the core of the problem of the impossibility to state the quantum formalism without any reference to an observer (whatever at this stage an observer could be). Now the problem is that it is impossible to define clearly what a measurement is! What is at stake is a definition that could be regarded as "strongly objective" in the meaning that d'Espagnat gave to this term [1] (i.e. without any mention of a human observer). Indeed, the Copenhagen interpretation, mainly proposed by Bohr, says that a measurement is an interaction between the system and a macroscopic classical apparatus 1. Since a physicist using a macroscopic apparatus for measuring a physical property on a system perfectly knows what he is doing, there is no ambiguity and the reduction postulate must be applied. Even if this point of view generally works in practice, it leaves open the question of knowing what a macroscopic apparatus is. The distinction micro / macro is not a sharp one, and assuming naively that a macroscopic system behaves classically is problematic since we know of many macroscopic systems showing a quantum behavior (superconductivity, superfluidity, etc.). Another difficulty with this view is that the quantum formalism is assumed to be universally valid and should be applied to any physical system, whether microscopic or macroscopic. A careful analysis of Bohr's position shows nevertheless that what he had in mind was not that the classical behavior of the apparatus was directly linked to its macroscopic aspect but that it was linked to the use that
the observer wanted to make of it. If the observer wants to make a measurement on it through another apparatus, it has a quantum behavior. If it is used to make an observation on another system, it has a classical behavior 2. In this case, it is clear that the role of a human observer in the definition of the measurement process can't be avoided and that this definition can't be considered as "strongly objective". The notorious analysis given by von Neumann of a measurement process seen as an interaction between the system and an apparatus shows that two opposite and irreconcilable descriptions seem equally valid. Assume that a measurement of a certain observable is made and that {|φi⟩} is a basis of the Hilbert space built from the eigenvectors of this observable associated with the eigenvalues λi (which, for the sake of simplicity, we assume to be non-degenerated). Let the system S be in a state |φ⟩ = Σi ci |φi⟩ and an apparatus A in the initial state |A0⟩. Then, before they interact, the state of the system-apparatus is the tensorial product: |Ψ0⟩ = |φ⟩ ⊗ |A0⟩ (1). The interaction between S and A is done through a Hamiltonian HAS operating during a short time. It is assumed that the apparatus is built in such a way that if the system is in the state |φi⟩, the apparatus will be in the state |Ai⟩ after the measurement, whatever its initial state. Now, there are two ways to describe the process. The first one is to consider that the system-apparatus is an isolated global system on which no measurement is made and to use the Schrödinger equation. This gives: |Ψ0⟩ → Σi ci |φi⟩ ⊗ |Ai⟩ (2). The second one is to consider that a measurement is made on the system S and that a value λk is found. The reduction postulate gives: |Ψ0⟩ → |φk⟩ ⊗ |Ak⟩ (3). Both descriptions seem equally valid though they lead to totally different states. Equation (2) shows that the system and the apparatus are at the end in an entangled state. In particular, this is to be interpreted as if the apparatus was in a state which is a superposition of states linked to different possible
results of the measurement 3. For example, if the system is a spin ½ particle and the apparatus a detector with a needle such that a spin up along Oz leads to a position of the needle pointing up (and a spin down, to a position of the needle pointing down), then equation (2) leads to a state of the needle that is a superposition of the positions up and down. Of course, no such macroscopic superposition has ever been observed. If we add another system (even a cat) to the initial system and the apparatus, it becomes entangled as well with the first two. This is the core of the celebrated Schrödinger's cat argument.
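The pre- and post-interaction states (1) and (2) can be written out numerically. The sketch below is illustrative only: the amplitudes c₁ = 0.6, c₂ = 0.8 and the ideal von Neumann pointer coupling are assumptions, not values from the paper.

```python
import numpy as np

# System basis |u1>, |u2> and apparatus states |A0> (ready), |A1>, |A2> (pointers)
u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A0 = np.array([1.0, 0.0, 0.0])
A1 = np.array([0.0, 1.0, 0.0])
A2 = np.array([0.0, 0.0, 1.0])

c1, c2 = 0.6, 0.8  # assumed amplitudes, |c1|^2 + |c2|^2 = 1

# Equation (1): product state before the interaction
before = np.kron(c1 * u1 + c2 * u2, A0)

# Equation (2): the ideal measurement interaction correlates system and pointer.
# The result is entangled: a superposition of the two pointer positions,
# not a definite outcome.
after = c1 * np.kron(u1, A1) + c2 * np.kron(u2, A2)
print(np.round(after, 3))
```

The entangled vector `after` cannot be factored into a single system state times a single apparatus state, which is exactly the point of the Schrödinger's cat argument.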
First proposed solutions
Faced with what seems a real inconsistency inside the quantum formalism, physicists have proposed many solutions. A first category of solutions considers that even if the quantum formalism has a universal physical validity, the consciousness of the observer lies outside of its scope. More precisely, a measurement is made when the consciousness of an observer interacts with a system, and this interaction has the physical effect of changing the state of the system and of projecting it according to the reduction postulate. This is the position of Wigner [2] and of London and Bauer [3]. Of course, this particular physical effect of a consciousness on a physical system is not very satisfying.
Another possibility is to modify the Schrödinger equation in a way that allows getting a reduction when a measurement is done, while of course preserving the current predictions when no measurement is made. The most famous attempt in this direction is due to Ghirardi, Rimini and Weber [4] (the G.R.W. formalism), who add a term to the Schrödinger equation, which then describes an evolution from a pure case to a proper mixture. A third possibility is to accept more radical changes by switching to hidden-variable theories like the Bohm theory [5], in which the measurement problem doesn't exist.
We don't have time here to examine the pros and cons of these various modified theories and will restrict our analysis to a more recently proposed solution which stays inside the pure quantum formalism. The solution came from a remark by Zeh [6] that no system is ever really totally isolated. Hence it is necessary to take the environment into account. The decoherence theory is nothing else than the description of the way to take the interaction between the system, the apparatus and the environment into account inside the quantum formalism. We shall first describe briefly the technical framework in which the decoherence theory is usually stated.
The density matrix formalism
The density matrix formalism has been invented for being able to deal with statistical mixtures of systems in different pure states, as in classical statistical mechanics. A proper mixture of a proportion p₁ of systems in the state |φ₁⟩ and a proportion p₂ of systems in the state |φ₂⟩ is described, in the basis (|φ₁⟩, |φ₂⟩), by the diagonal density matrix:

ρ = ( p₁  0 )
    ( 0   p₂ )   (7)

The density matrix of a system in a pure state |ψ⟩ is ρ = |ψ⟩⟨ψ|. For the sake of simplicity, let's take an example in a two-dimensional Hilbert space with a basis (|u₁⟩, |u₂⟩). If

|ψ⟩ = c₁|u₁⟩ + c₂|u₂⟩   (8)

then the density matrix in this basis is:

ρ = ( |c₁|²   c₁c₂* )
    ( c₂c₁*  |c₂|² )   (9)

It is important to notice that no individual system can have a diagonal density matrix with more than one non-null element. Indeed, as we have seen, if the state of such a system is a superposition (8) of vectors spanning a basis of the Hilbert space, then its density matrix (9) contains non-null off-diagonal terms, whereas if the state of the system is one of the vectors of the basis (for example |u₁⟩), then the density matrix is:

ρ = ( 1  0 )
    ( 0  0 )

One can see that in neither case can the density matrix be of the diagonal form (7) with more than one non-null element. Of course this is true for any space of higher dimensionality. So, if we are not in presence of a set of systems initially in different states, such a density matrix describes inevitably what d'Espagnat has called [1] an improper mixture and not a proper mixture. This will be important in the following to show that the decoherence process (which leads to a diagonal density matrix) is not sufficient to explain the reduction of the state vector.
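As a quick numerical check (illustrative only; the equal-weight amplitudes and 50/50 mixture below are assumptions), one can compare the density matrix of an individual system in a superposition with that of a proper statistical mixture:

```python
import numpy as np

u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c1, c2 = 1 / np.sqrt(2), 1 / np.sqrt(2)  # assumed equal-weight superposition

# Pure superposed state: rho = |psi><psi| has non-null off-diagonal terms
psi = c1 * u1 + c2 * u2
rho_pure = np.outer(psi, psi.conj())

# Proper mixture: half the systems in |u1>, half in |u2> -> diagonal matrix
rho_mixed = 0.5 * np.outer(u1, u1) + 0.5 * np.outer(u2, u2)

print(np.round(rho_pure, 2))   # off-diagonals present for an individual system
print(np.round(rho_mixed, 2))  # diagonal: describes a set of different systems
```

Both matrices have the same diagonal, yet they describe physically different situations, which is the improper/proper mixture distinction used below.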
The role of the environment
Following Zeh's remark, Zurek [7] proposed the following mechanism to explain the reduction. Let's analyze the measurement process as we did previously, but let's take the environment into account and consider a big system composed of the initially measured system plus the apparatus plus the environment:

ρ = |Ψ⟩⟨Ψ|   (12)

After the interaction, according to the Schrödinger equation:

|Ψ⟩ = Σᵢ cᵢ|uᵢ⟩|A₀⟩|E₀⟩ → Σᵢ cᵢ|uᵢ⟩|Aᵢ⟩|Eᵢ⟩   (13)

As previously, we can assume a two-dimensional space without loss of generality (with coefficients c₁ and c₂). In the basis (|u₁⟩|A₁⟩|E₁⟩, |u₂⟩|A₂⟩|E₂⟩) we have, similarly to equation (9):

ρ = ( |c₁|²   c₁c₂* )
    ( c₂c₁*  |c₂|² )   (14)

Apparently nothing has been gained! In the basis of the Hilbert space which is the tensor product of the Hilbert spaces of the system, the apparatus and the environment, the density matrix has exactly the same form as before. But the key point comes from the remark that we can't perform measurements on all the degrees of freedom of the environment, because that would require apparatuses that are totally out of reach.
The quantum formalism prescribes in this case that the density matrix of the subsystem SA formed by the initial system and the apparatus is given by the partial trace of ρ over the degrees of freedom of the environment, which can be computed as:

ρ_SA = Tr_E(ρ)   (15)

Now, it is possible to show that in general the off-diagonal coefficients, proportional to ⟨Eᵢ|Eⱼ⟩ for i ≠ j, decrease towards 0 very rapidly. So:

ρ_SA ≈ ( |c₁|²   0    )
       ( 0     |c₂|² )   (16)

PoS(FFP14)223

This density matrix looks like the density matrix of equation (7), that of a statistical mixture, and no more like that of a superposed state. So it seems that each system belonging to the set of systems described by ρ_SA now has a definite state corresponding to one of the eigenvectors of the observable that has been measured. This is the reason why many authors (including Zurek in his first paper) thought that the decoherence process allows one to explain in an objective way the reduction of the state vector.
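The suppression of the off-diagonal terms under the partial trace can be checked numerically. In this toy sketch (illustrative only; the amplitudes and the nearly orthogonal environment states are assumptions), the coherence term of ρ_SA is reduced by the overlap ⟨E₁|E₂⟩:

```python
import numpy as np

c1, c2 = 0.6, 0.8        # assumed amplitudes
u1, u2 = np.eye(2)       # system basis |u1>, |u2>
A1, A2 = np.eye(2)       # pointer states |A1>, |A2>

# Environment states |E1>, |E2>: decoherence drives <E1|E2> towards 0
E1 = np.array([1.0, 0.0])
E2 = np.array([0.001, np.sqrt(1 - 0.001**2)])  # almost orthogonal to E1

# Global entangled state of equation (13) and its density matrix (12)
psi = c1 * np.kron(np.kron(u1, A1), E1) + c2 * np.kron(np.kron(u2, A2), E2)
rho = np.outer(psi, psi)

# Partial trace over the environment (the last 2-dimensional tensor factor)
rho = rho.reshape(4, 2, 4, 2)
rho_SA = np.einsum('ikjk->ij', rho)

# The off-diagonal (coherence) term is c1*c2*<E1|E2>, i.e. tiny
print(np.round(rho_SA, 4))
```

With perfectly orthogonal environment states the coherence term vanishes exactly and ρ_SA has the diagonal form of equation (16), but as an improper mixture, as discussed in the next section.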
Is the measurement problem solved?
Solving the measurement problem would mean that, independently of any observer, the initially superposed state of the system has been reduced to a definite state. Now, it's easy to see that this is not the case. First of all, the final diagonal form of the density matrix results from the partial trace of the global density matrix, and the only reason why this partial trace can be taken is that it is acknowledged that no measurement of the environment is possible for the observer. It gives the correct predictions provided that the observer won't do any measurement on the environment. That means that the final diagonal form of the density matrix is the form it takes for an observer with limited means of measurement. Hence, it is not an objective reduction. The second reason is that the small non-diagonal terms that have been considered as null (⟨Eᵢ|Eⱼ⟩ → 0) are actually not rigorously null and can even become big again after a (very) long time.
Another important point to notice is that even though the density matrix looks like the density matrix of a statistical mixture, it is actually the density matrix of an improper mixture: it is composed of systems that are all identical. So it is not correct to interpret the decoherence process as leading to a set of systems, each having a definite state, in proportions given by the diagonal coefficients of the matrix. The similar form of a matrix obtained through a partial trace and of the matrix of a proper mixture does not allow us to assimilate an improper mixture to a proper one. This is particularly visible when one considers only an individual system, as we noticed above. As Bell insisted [8], the correct interpretation should be that each system is in a state where all the possibilities are simultaneously present. This is the celebrated "and/or" difficulty. These reasons show clearly that the decoherence process can't be considered as solving the measurement problem at all. Nowhere is it said what exactly a measurement is, and even after decoherence, it is still necessary to use the equivalent of the reduction postulate and the Born rule to predict what will be observed and with which probability. What the decoherence process brings is actually an explanation of the classical appearance of the world, provided we use the standard recipes to compute. It explains why we (human observers) can't observe any macroscopic superposition and why what we see conforms to the classical description of the world. But the underlying reality (if there is any) remains in a superposed and entangled state. Taken literally, that means that the reduction postulate is nothing but a convenient and practical way to describe the observations but doesn't correspond to any real physical process. Now, the standard recipes to compute assume that we know what a measurement is. It is when a measurement is made that the probability of finding (observing) a specific result is given
by the corresponding diagonal element of the density matrix. But nowhere inside the formalism of decoherence is it said what a measurement is. We are left with the initial problem!
What is a measurement?
Inside a modified theory such as the G.R.W. formalism, which changes the Schrödinger equation, a measurement is done through a specific interaction and can be a physical process acting on the state of the system. But if we stay inside the pure quantum formalism, it seems that there is no other way to define a measurement than to say that it happens when an observation is made by an observer, and even when a conscious mind becomes aware of the result. The ultimate point is when somebody knows what the result is! That looks like the old proposal of London, Bauer and Wigner. But if we take the decoherence process into account, there is now a big difference from their position. They thought that the reduction that occurs during a measurement was a physical process through a real action of the mind on the system, the mind changing the state of the system.
It is now possible to defend a much less shocking position. The reduction is no longer a physical process but merely the fact that when a conscious mind makes an observation, what it sees is described by the diagonal density matrix, which states that no superposition is visible. That doesn't mean that the state of the system is physically reduced (actually the system remains in an entangled state with the apparatus and the environment) but that what the conscious mind sees can only conform to a classical appearance, as the reduction postulate prescribes. This is reminiscent of the Everett interpretation, which says that there is no reduction of the state and that the wave function of the whole universe remains superposed.
The convivial solipsism
In the Everett interpretation, there is no reduction (the physical world remains in a superposed state) but the observer is divided into as many observers as there are branches (which is not very economical).
If |O₀⟩ is the initial state of the observer and |Oᵢ⟩ is its i-th state:

Σᵢ cᵢ|uᵢ⟩|Aᵢ⟩|O₀⟩ → Σᵢ cᵢ|uᵢ⟩|Aᵢ⟩|Oᵢ⟩   (20)

We propose another interpretation [9] (extending a first analogous model proposed by d'Espagnat [1]) through a hanging-up mechanism. There is only one observer and one universe, but the consciousness of the observer hangs up to one branch. Once the consciousness is hung up to one branch, it will hang up only to branches that are daughters of this branch for all the following observations. That guarantees:
- that repeating the same measurement will give again the same result;
- that any conflict with another observer is impossible.
Assume for example that the system is a spin-½ particle in a superposed state along Oz. Now, the hanging-up mechanism says that the consciousness of the observer chooses one branch at random (respecting the Born rule linked to the coefficients of the linear combination).
Hence:

|O₀⟩ → |O↑⟩ or |O₀⟩ → |O↓⟩   (21)

So, even if the universal superposed wave function is not reduced, for all subsequent measurements for this observer everything happens as if the wave function had been reduced to either the spin-up branch or the spin-down branch. Nevertheless, this observer continues to physically exist in the other branches, even if he can't be conscious of what happens in these branches. That is a sort of solipsism, because the consciousness of each observer is located inside its own branch independently of the others. But it is convivial, since no conflict is possible. The quantum rules and the hanging-up mechanism for each observer prevent any possibility of noticing a divergence between the perceptions of two different observers. There is another striking consequence, which is a strange answer to the famous phrase of Einstein: God doesn't play dice. In this case, Einstein was right, God doesn't play dice, but you do! This is so because the random aspect of the quantum predictions comes not from the fact that physical systems change at random (the dynamics of the Universe is fully deterministic) but from the random way your consciousness chooses the branch to which it hangs up!
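As far as its statistics are concerned, the hanging-up mechanism amounts to sampling one branch with Born-rule weights and then staying inside its daughter branches. The following toy sketch illustrates this; the amplitudes and the random seed are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

c_up, c_down = np.sqrt(0.3), np.sqrt(0.7)   # assumed amplitudes along Oz
branches = ["up", "down"]
weights = [abs(c_up)**2, abs(c_down)**2]    # Born rule: probability |c_i|^2

# The consciousness "hangs up" to one branch at random...
chosen = rng.choice(branches, p=weights)

# ...and every later observation stays inside daughter branches,
# so repeating the same measurement returns the same result.
repeats = [chosen for _ in range(5)]
print(chosen, repeats)
```

The randomness lives entirely in the choice of branch; the global superposed state itself evolves deterministically.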
This form of the density matrix is analogous to the classical case of a statistical mixture of a proportion p₁ of systems in the state |φ₁⟩ and a proportion p₂ of systems in the state |φ₂⟩.
Synthesis of Fe(OH)3/g-C3N4 catalyst by a simple two-step method and application in organic pollutants degradation
The photocatalyst Fe(OH)3/g-C3N4 was successfully prepared by a simple method. The degradation of BP-5R by Fe(OH)3/g-C3N4 and by g-C3N4 was compared; the results showed that the removal rate of BP-5R by Fe(OH)3/g-C3N4 reached 93.9% after 1.0 h, which is 2.87 times that of pure g-C3N4. The prepared samples were characterized and analyzed using a series of characterization methods. It was found that the enhancement of the photocatalytic performance is mainly due to the transfer of photoelectrons to the Fe(OH)3. At the same time, the main active species in the photocatalytic degradation process were analyzed. The results demonstrated that the active species h+ and •O2− play the predominant role in the photocatalytic reaction.
Introduction
In recent years, more and more industrial synthetic organic compounds have been widely used around the world. However, excessive use has led to a wide variety of organic pollutants entering natural water bodies and has caused incalculable harm to the ecological environment and human health. Among the many organic pollutants, industrial synthetic dyes occupy a significant position. They are widely used in the textile, leather and printing industries, and the wastewater produced contains a large amount of organic pollutants, which can remain in the environment and are difficult to degrade completely in a short time. Commonly used methods for treating such wastewater include coagulation-sedimentation, adsorption [1], biological methods [2] and photocatalytic degradation [3]. Among these methods, photocatalytic degradation has the advantages of high degradation efficiency without secondary pollution. At the same time, photocatalysts have excellent regenerability and can convert solar energy into chemical energy [4], which can both address environmental problems and help deal with the current energy shortage.
The low activity and high preparation cost of catalysts limit the wide application of photocatalysts in practical engineering. Therefore, it is particularly important to develop a catalyst with low cost and high catalytic activity to solve the existing environmental and energy problems. Graphitic carbon nitride (g-C3N4) is a potential new catalyst that can effectively address environmental and energy problems, thanks to its visible-light response, simple preparation method, low cost of raw materials and controllable electronic properties [5]. However, the drawback of g-C3N4 is the rapid recombination of photoinduced electron-hole pairs during photocatalysis [6]. In previous studies, researchers have tried different methods of modifying the material itself to enhance its photocatalytic performance. In this paper, graphitic carbon nitride was prepared by a thermal polycondensation method; the as-prepared carrier was then fully mixed with ferric chloride, Fe(OH)3 was uniformly loaded on the surface of the g-C3N4 by a precipitation method, and the composite Fe(OH)3/g-C3N4 was successfully prepared.
Preparation of catalysts
Firstly, 20 g of melamine was placed in a muffle furnace for heat treatment at 550 °C for 3.0 h, and the obtained solid was ground to a powder to obtain g-C3N4. Secondly, 5 g of g-C3N4 and 125 mL of FeCl3 solution (0.1 mol/L) were fully mixed for 2 h, then 125 mL of NaOH solution (0.2 mol/L) was added, and the mixture was stirred at room temperature for 4 h until precipitation was complete. After filtration, washing and drying at 105 °C for 24 h, the prepared material was designated Fe(OH)3/g-C3N4.
Characterization of catalyst microstructure and chemical properties
Figs. 1(a) and 1(b) are SEM images of the as-prepared catalysts. From Fig. 1(a), a lamellar structure can be observed, and the surface of the material is very smooth, which is typical of the g-C3N4 structure. After Fe(OH)3 was loaded on its surface, obvious small blocky particles can be observed (Fig. 1(b)); the introduction of Fe(OH)3 can significantly increase the specific surface area of the catalyst. The XRD patterns of g-C3N4 and Fe(OH)3/g-C3N4 are shown in Fig. 2. It can be observed that the two materials share characteristic peaks at 2θ = 12.8° and 27.6°, which are attributed to the structure of carbon nitride; the two peaks correspond to the (100) and (002) crystal planes of g-C3N4. The (100) crystal plane is associated with the in-plane structural packing motif, and the intense diffraction peak located at 27.6°, corresponding to the aromatic ring structure in g-C3N4, indicates that the crystal structure of g-C3N4 did not change after modification. Compared with g-C3N4, new characteristic peaks appeared at 2θ = 35.6°, 41.9°, 51.2° and 63.8° in the XRD pattern of Fe(OH)3/g-C3N4 (JCPDS Card 22-0346), suggesting that the Fe(OH)3 particles were successfully loaded on the surface [7].
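The interplanar spacings corresponding to these 2θ positions follow from Bragg's law, nλ = 2d·sin θ. A quick sketch (assuming Cu Kα radiation, λ = 1.5406 Å, which the paper does not state):

```python
import math

wavelength = 1.5406  # Å, Cu K-alpha (assumed; not stated in the paper)

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2-theta positions reported for g-C3N4 and Fe(OH)3/g-C3N4
for tt in (12.8, 27.6, 35.6, 41.9, 51.2, 63.8):
    print(f"2theta = {tt:5.1f} deg -> d = {d_spacing(tt):.3f} Å")
```

The spacing from the 27.6° (002) peak, around 3.2 Å, is the interlayer stacking distance of the aromatic planes in g-C3N4.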
Study on Photocatalytic Performance and Mechanism Analysis
The catalytic performance of the different photocatalysts was compared by degrading BP-5R. The reaction was carried out in a solution with a BP-5R concentration of 10 mg/L, and the photocatalyst dosage was 20 mg. Before the photocatalytic reaction, the suspension was first stirred in the dark for 0.5 h [8]. The photocatalytic reaction was carried out under a 500 W Xe lamp, and samples were taken every 10 min to obtain the residual concentration of BP-5R in the solution; the obtained data were fitted with a quasi-first-order kinetic model. From Fig. 3(a), it can be observed that the concentration of the dye in the solution hardly changed after reaction in the dark for 0.5 h, which indicates that the prepared material has poor adsorption capacity. When the reaction was moved to visible light, the concentration of the dye in the solution decreased quickly. After 1.0 h of photocatalytic reaction, the removal rates of BP-5R by g-C3N4 and Fe(OH)3/g-C3N4 reached 32.7% and 93.9%, respectively; the latter is 2.87 times the former, so the photocatalytic performance is significantly improved after modification [9]. Fig. 3(b) shows the result of fitting the data from the photocatalytic reaction with the kinetic model. The fitting degrees (R²) for the two materials are 0.996 and 0.943, respectively, and the calculated rate constants are 0.00636 and 0.4346, respectively. Fig. 4(a) is the PL spectrum of the as-prepared photocatalysts, measured with an excitation wavelength of 375 nm. The maximum fluorescence emission peak of g-C3N4 lies at about 450 nm, corresponding to the band gap across which photogenerated e⁻-h⁺ pairs recombine. The emission peak of Fe(OH)3/g-C3N4 is red-shifted to about 462 nm, and the peak intensity of the composite is decreased, indicating that the modification of the material can reduce the recombination of e⁻-h⁺ pairs [10].
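The quasi-first-order model used here is ln(C₀/C) = k·t. A minimal fitting sketch follows; the concentration series below is synthetic, for illustration only, not the paper's measured data:

```python
import numpy as np

# Synthetic BP-5R concentrations (mg/L) sampled every 10 min — illustrative only
t = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)  # min
C = 10.0 * np.exp(-0.045 * t)                            # assumed decay

# Quasi-first-order kinetics: ln(C0/C) = k*t, fitted by linear least squares
y = np.log(C[0] / C)
k, _ = np.polyfit(t, y, 1)

removal = 100.0 * (1.0 - C[-1] / C[0])
print(f"k = {k:.4f} min^-1, removal at 60 min = {removal:.1f}%")
```

Fitting ln(C₀/C) against t and reading the slope is how rate constants like those reported for Fig. 3(b) are typically obtained, with R² indicating the goodness of the linear fit.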
In order to study the main active species in the photocatalytic degradation of BP-5R by Fe(OH)3/g-C3N4, sodium oxalate, p-benzoquinone (BQ) and isopropyl alcohol (IPA) were used as scavengers. The experimental results are shown in Fig. 4(b). Without any scavenger added, Fe(OH)3/g-C3N4 removed 93.9% of the BP-5R at 60 min. When sodium oxalate was added, the removal rate was 68.9%. When IPA was added, the degradation rate was essentially unaffected. When BQ was added, the removal rate was only 21.5%. Therefore, it can be concluded that the main active species in the degradation process are h⁺ and •O₂⁻, and a possible degradation mechanism of BP-5R can be proposed. Under visible-light irradiation, photogenerated electrons in the photocatalyst are excited from the VB to the CB, and a part of these electrons combine with O₂ to generate •O₂⁻, which can react with BP-5R. The photogenerated electrons in the conduction band also transfer to the Fe(OH)3, which more effectively inhibits the recombination of e⁻-h⁺ pairs; this is the main factor enhancing the photocatalytic performance of the composite material. Together, the two active species can rapidly remove BP-5R from the solution.
Conclusion
Graphite carbon nitride was successfully prepared by thermal polycondensation method, and then Fe (OH) 3 was successfully loaded on g-C 3 N 4 by precipitation method. The removal rate of the prepared Fe (OH) 3 /g-C 3 N 4 to BP-5R was 2.87 times than that of g-C 3 N 4 . The enhancement of photocatalytic activity is attributed to Fe(OH) 3 loaded can effectively inhibit the recombination of e --h + and increase the active species in the photocatalytic reaction process. Experiments show that the active species including h + and •O 2 play a major role in the photocatalytic reaction process. This study provides a theoretical basis for the wide application of g-C 3 N 4 . | 2019-09-10T00:27:54.812Z | 2019-08-09T00:00:00.000 | {
"year": 2019,
"sha1": "ce27e0a78c7b75e25992cf8b52e3ca5dd9bbc0a5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/300/3/032101",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3174527e20adfcc99fc66f164466fe7fa1abc3d7",
"s2fieldsofstudy": [
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |